Employee Testimonies

We have collected public allegations from former employees that reveal concerning patterns in the organization's internal culture, Altman's personal integrity and fitness to lead OpenAI, and the risks that lie ahead. We also catalogue the steady depletion of safety and policy talent that OpenAI has suffered across multiple mass exoduses in the past five years.

May 18, 2024

Vox

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

Company insiders explain why safety-conscious employees are leaving.

Sigal Samuel

Read Full Article

Regarding Altman

Ilya Sutskever

Co-founder, co-lead of Superalignment Team

Departed early 2024

“I don’t think Sam is the guy who should have the finger on the button for AGI.”

May 15, 2025

The Atlantic

View Source

Ilya Sutskever

Co-founder, co-lead of Superalignment Team

Departed early 2024

From “The Optimist”: “It had taken Sutskever years to be able to put his finger on Altman’s pattern of behavior—how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. ‘Oh, I must have misspoken,’ Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.”

May 20, 2025

The Optimist by Keach Hagey

View Source

Mira Murati

Chief Technology Officer

Departed late 2024

“I don’t feel comfortable about Sam leading us to AGI.”

May 15, 2025

The Atlantic

View Source

Mira Murati

Chief Technology Officer

Departed late 2024

From “The Optimist”: “Murati described how what she saw as Altman’s toxic style of running the company had been causing problems for years, especially when his anxiety flared up, such as in recent months. In her experience, Altman had a simple playbook: first, say whatever he needed to say to get you to do what he wanted, and second, if that didn’t work, undermine you or destroy your credibility. Altman told people what they wanted to hear, even if it meant promising two competing people the same job or giving Microsoft some ground on a negotiation that she had spent months trying to gain … Murati had given Altman feedback on all these points in the past, she said. Altman had responded by bringing the company’s head of HR, Diane Yoon, to their one-on-ones for weeks until she finally told him that she didn’t intend to share her feedback with the board.”

May 20, 2025

The Optimist by Keach Hagey

View Source

Geoffrey Irving

Member of Technical Staff

Departed late 2019

“My prior is strongly against Sam after working for him for two years at OpenAI: 1. He was always nice to me. 2. He lied to me on various occasions 3. He was deceptive, manipulative, and worse to others, including my close friends (again, only nice to me, for reasons)”

November 20, 2023

Twitter/X

View Source

Scott Aaronson

Safety Researcher

Departed mid 2024

“One of the big ironies here is that if you want to hold Sam Altman to account, for example, you don't have to say anything that Sam Altman himself wasn't saying five or six years ago.”

December 4, 2024

Win Win Podcast

View Source

Scott Aaronson

Safety Researcher

Departed mid 2024

“It now looks to many people like the previous board has been 100% vindicated in its fear that Sam did, indeed, plan to move OpenAI far away from the nonprofit mission with which it was founded … I kinda liked the previous mission, as well as the expressed beliefs of the previous Sam Altman!”

October 1, 2024

Personal Blog

View Source

Tasha McCauley

Member of the Board

Departed late 2023

“We also feel that developments since [Altman] returned to the company — including his reinstatement to the board and the departure of senior safety-focused talent — bode ill for the OpenAI experiment in self-governance.”

May 26, 2024

The Economist

View Source

Tasha McCauley

Member of the Board

Departed late 2023

“Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI … As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.”

March 8, 2024

Twitter/X

View Source

Todor Markov

Member of Technical Staff, Safety Systems and Preparedness

Departed early 2024

“This course of events led me to believe that CEO Sam Altman was a person of low integrity who had directly lied to employees about the extent of his knowledge and involvement in OpenAI’s practices of forcing departing employees to sign lifetime non-disparagement agreements; and that he was very likely lying to employees about a number of other important topics, including but not limited to the sincerity of OpenAI’s commitment to the Charter — which had up to that point been considered binding and taken very seriously internally by me and other OpenAI employees.”

April 11, 2025

Northern District Court of California

View Source

Todor Markov

Member of Technical Staff, Safety Systems and Preparedness

Departed early 2024

From The New York Times: “[Markov] said OpenAI’s leadership had repeatedly misled employees about the issue. Because of this, he argued, the company’s leadership could not be trusted to build A.G.I. — an echo of what the company’s board had said when it fired Mr. Altman. ‘You often talk about our responsibility to develop A.G.I. safely and to distribute the benefits broadly,’ he wrote. ‘How do you expect to be trusted with that responsibility?’”

September 3, 2024

The New York Times

View Source

Dario and Daniela Amodei

VP of Research (Dario), VP of Safety & Policy (Daniela)

Departed late 2020

From "Empire of AI:" “To people around them, the Amodei siblings would describe Altman’s tactics as “gaslighting” and “psychological abuse.”

May 20, 2025

Empire of AI by Karen Hao

View Source

Regarding OpenAI's Restructuring

Nisan Stiennon

Member of Technical Staff

Departed late 2020

“OpenAI may one day build technology that could get us all killed. It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control.”

April 23, 2025

CNBC

View Source

Gretchen Krueger

Policy Researcher

Departed early 2024

“OpenAI’s non-profit governance and profit cap are part of why I joined in 2019. If they are removed, it’s as if they never existed—perhaps worse. A (perceived) mission lock can give a false sense of assurance.”

September 29, 2024

Twitter/X

View Source

Jeffrey Wu

Member of Technical Staff

Departed mid 2024

“We can say goodbye to the original version of OpenAI that wanted to be unconstrained by financial obligations … It seems to me the original nonprofit has been disempowered and had its mission reinterpreted to be fully aligned with profit.”

September 26, 2024

Vox

View Source

Page Hedley

Policy and Ethics Advisor

Departed early 2019

“All of these things—the legally enforceable primacy of the charitable purpose, the profit caps, and who actually owns and controls AGI—are in jeopardy under this restructuring.”

May 22, 2025

Lawfare

View Source

Jacob Hilton

Researcher

Departed early 2023

“Throughout multiple announcements and court filings, OpenAI has failed to address the issue of control, which currently belongs to the non-profit. The only reasonable interpretation is that they will take this asset from the non-profit if they can get away with it.”

April 14, 2025

Twitter/X

View Source

Steven Adler

Research and Programs

Departed late 2024

“OpenAI voluntarily chose to start as a nonprofit and to take on various constraints, in return for garnering various related advantages; it shouldn't now get to renege on those constraints.”

April 24, 2025

Twitter/X

View Source

Page Hedley

Policy and Ethics Advisor

Departed early 2019

“I think that in the new structure that OpenAI wants, the incentives to rush to make those decisions [to cut corners on safety testing] will go up and there will no longer be anybody really who can tell them not to, tell them this is not OK.”

April 23, 2025

Associated Press

View Source

Todor Markov

Member of Technical Staff, Safety Systems and Preparedness

Departed early 2024

“We worked at OpenAI; we know the promises it was founded on and we’re worried that in the conversion those promises will be broken. The nonprofit needs to retain control of the for-profit. This has nothing to do with Elon Musk and everything to do with the public interest.”

April 11, 2025

Twitter/X

View Source

Anish Tondwalkar

Member of Technical Staff

Departed 2024

“If OpenAI is allowed to become a for-profit, these safeguards, and OpenAI’s duty to the public can vanish overnight.”

April 23, 2025

Fast Company

View Source

Steven Adler

Research and Programs

Departed late 2024

"OpenAI promised nonprofit control over this incredibly significant for-profit entity that it was building. [...] I'm pretty concerned about giving up the nonprofit's control, and it's not clear to me that there is a reasonable price that could be paid to adequately compensate for it. "

May 8, 2025

The Cognitive Revolution

View Source

Helen Toner

Member of the Board

Departed late 2023

“OpenAI keeps telling us about how they're a nonprofit with a public benefit mission. It's getting harder and harder to believe them.”

June 6, 2025

Twitter/X

View Source

William Saunders

Member of Technical Staff

Departed early 2024

“I’m most concerned about what this means for governance of safety decisions at OpenAI. If the non-profit board is no longer in control of these decisions and Sam Altman holds a significant equity stake, this creates more incentive to race and cut corners.”

September 27, 2024

The Guardian

View Source

Anish Tondwalkar

Member of Technical Staff

Departed 2024

“OpenAI used its non-profit mission—to ensure that AGI reflects our values—in order to solicit donations and recruit away from their for-profit competitors. Now is not the time to abandon that mission.”

April 23, 2025

Twitter/X

View Source

Carroll Wainwright

Member of Technical Staff

Departed early 2024

“OpenAI was structured as a non-profit, but it acted like a for-profit. The non-profit mission was a promise to do the right thing when the stakes got high. Now that the stakes are high, the non-profit structure is being abandoned, which means the promise was ultimately empty.”

September 29, 2024

Twitter/X

View Source

Regarding Safety & Transparency

Jacob Hilton

Researcher

Departed early 2023

“Given that OpenAI has previously used access to liquidity as an intimidation tactic, many former employees will still feel scared to speak out.”

May 24, 2024

Twitter/X

View Source

Gretchen Krueger

Policy Researcher

Departed early 2024

“I resigned a few hours before hearing the news about [Sutskever and Leike], and I made my decision independently. I share their concerns. I also have additional and overlapping concerns. We [as OpenAI] need to do more to improve foundational things like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment.”

May 22, 2024

Twitter/X

View Source

Daniel Kokotajlo

Member of Governance Team

Departed early 2024

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen … I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

May 18, 2024

Vox

View Source

Daniel Kokotajlo

Member of Governance Team

Departed early 2024

“Part of why I left OpenAI is that I just don’t think the company is dispositionally on track to make the right decisions that it would need to make to address the two risks that we just talked about. So I think that we’re not on track to have figured out how to actually control superintelligences, and we’re not on track to have figured out how to make it democratic control instead of just a crazy possible dictatorship.”

May 15, 2025

The New York Times

View Source

Daniel Kokotajlo

Member of Governance Team

Departed early 2024

“In April, I resigned from OpenAI after losing confidence that the company would behave responsibly in its attempt to build artificial general intelligence”

June 4, 2024

Twitter/X

View Source

Jan Leike

Head of Alignment

Departed early 2024

“Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done […] Over the past years, safety culture and processes have taken a backseat to shiny products.”

May 17, 2024

Twitter/X

View Source

William Saunders

Member of Technical Staff

Departed early 2024

“When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me or hundreds of other engineers at the company to bypass access controls and steal the company’s most advanced AI systems including GPT-4. … OpenAI will say that they are improving. I and other employees who resigned doubt they will be ready in time.”

September 17, 2024

U.S. Senate Hearing

View Source

Leopold Aschenbrenner

Member of the Superalignment Team

Departed early 2024

“There was a commitment around superalignment compute, dedicating 20% of compute for long-term safety research. You and I could have a totally reasonable debate about the appropriate level of compute for superalignment. That’s not really the issue. The issue is that the commitment was made and it was used to recruit people. It was very public. It was made because there was a recognition that there would always be something more urgent than long-term safety research, like a new product. In the end, they just didn’t keep the commitment. There was always something more urgent than long-term safety research.”

June 4, 2024

Dwarkesh Podcast

View Source

Leopold Aschenbrenner

Member of the Superalignment Team

Departed early 2024

“Last year, I wrote an internal memo about OpenAI's security, which I thought was egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors. I shared this memo with a few colleagues and a couple of members of leadership, who mostly said it was helpful … The reason I bring this up is that when I was fired, it was made very explicit that the security memo was a major reason for my being fired. They said, ‘the reason this is a firing and not a warning is because of the security memo.’”

June 3, 2024

Dwarkesh Podcast

View Source

Other Testimonies

Rosie Campbell

Member of Trust and Safety, Policy

Departed late 2024

“I’ve been unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture.”

November 30, 2024

Personal Blog

View Source

Richard Ngo

Research Scientist

Departed late 2024

“While the ‘making AGI’ part of the mission seems well on track, it feels like I (and others) have gradually realized how much harder it is to contribute in a robustly positive way to the ‘go well’ part of the mission.”

November 13, 2024

Twitter/X

View Source

Scott Aaronson

Safety Researcher

Departed mid 2024

“OpenAI, and Sam Altman in particular, say that they oppose SB 1047 simply because AI regulation should be handled at the federal rather than the state level … On this issue as on others, it seems to me that anyone who’s serious about a problem doesn’t get to reject a positive step that’s on offer, in favor of a utopian solution that isn’t on offer.”

September 4, 2024

Personal Blog

View Source

Suchir Balaji

Member of Technical Staff

Departed late 2024

“If you believe what I believe, you have to just leave the company … it is time for Congress to step in.”

October 23, 2024

The New York Times

View Source

Helen Toner

Member of the Board

Departed late 2023

“My experience on the board of OpenAI taught me how fragile internal guardrails are when money is on the line, and why it's imperative that policymakers step in.”

September 17, 2024

U.S. Senate Hearing

View Source

Cullen O'Keefe

Research Lead

Departed early 2024

“While the risk of nuclear catastrophe still haunts us, we are all much safer due to the steps the U.S. took last century to manage this risk… AI may bring risks of a similar magnitude this century.”

July 10, 2024

Lawfare

View Source

Miles Brundage

Head of Policy Research

Departed late 2024

“In short, neither OpenAI nor any other frontier lab is ready [for the arrival of powerful AI], and the world is also not ready.”

October 23, 2024

Personal Blog

View Source

Daniel Ziegler

Member of Technical Staff

Departed early 2021

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

June 4, 2024

Right To Warn Letter

View Source

Carroll Wainwright

Member of Technical Staff

Departed early 2024

"AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."

June 4, 2024

Right To Warn Letter

View Source

In total, OpenAI has lost a significant number of staff who focused on AI safety, whose departures were sometimes suspected to stem from disagreements about OpenAI’s approach to safety issues, or who were important to OpenAI’s ethics and policy work:

John Schulman

Head of Alignment Science

Departed late 2024


Pavel Izmailov

Member of the Superalignment Team

Departed early 2024


Beth Barnes

Researcher, Alignment Team

Departed late 2022


Lilian Weng

VP of Research, Head of Safety Systems

Departed late 2024


Scott Aaronson

Safety Researcher

Departed mid 2024 


Ilya Sutskever

Co-founder, co-lead of Superalignment Team

Departed early 2024


Jan Leike

Co-lead of Superalignment Team

Departed early 2024


Ryan Lowe

Member of Technical Staff

Departed early 2024


Todor Markov 

Member of Technical Staff, Safety Systems and Preparedness 

Departed early 2024 


Steven Adler

Research and Programs, AI Safety

Departed late 2024


Leopold Aschenbrenner

Member of the Superalignment Team

Departed early 2024


Dario Amodei

VP of Research

Departed late 2020


Harri Edwards

Research Scientist

Departed late 2024


Neil Chowdhury

Member of Technical Staff

Departed late 2024


William Saunders

Member of Technical Staff

Departed early 2024


Geoffrey Irving

Member of Technical Staff

Departed late 2019


Carroll Wainwright

Member of Technical Staff

Departed early 2024

Jack Clark

Policy Director

Departed late 2020


Chris Clark

Head of Nonprofit and Strategic Initiatives

Departed early 2024


Sherry Lachman

Head of Social Impact

Departed mid 2024


Gretchen Krueger

Policy Researcher

Departed early 2024


Daniel Kokotajlo

Member of Governance Team

Departed early 2024


Tasha McCauley

Member of the Board

Departed late 2023


Page Hedley

Policy and Ethics Advisor

Departed early 2019


Girish Sastry

Policy Researcher

Departed late 2024

Helen Toner

Member of the Board

Departed late 2023


Daniela Amodei

VP of Safety and Policy

Departed late 2020

Cullen O'Keefe

Research Lead

Departed early 2024


Miles Brundage

Head of Policy Research

Departed late 2024


Rosie Campbell

Trust and Safety + Policy

Departed late 2024


Ashyana-Jasmine Kachra

Policy Research

Departed mid 2024 


Richard Ngo

Research Scientist

Departed late 2024


Sam McCandlish

Research Lead

Departed late 2020


Tom Henighan

Member of Technical Staff

Departed early 2021

Steven Bills

Member of Technical Staff

Departed mid 2024

Tom Brown

Research Engineer

Departed late 2020


Nicholas Joseph

Member of Technical Staff

Departed early 2021


Collin Burns

Member of Technical Staff

Departed mid 2021


Amanda Askell

Research Scientist

Departed early 2021


Jonathan Uesato

Member of Technical Staff

Departed mid 2024


Benjamin Mann

Member of Technical Staff

Departed late 2020


Paul Christiano

Member of Technical Staff

Departed early 2021


Jan Hendrik Kirchner

Member of Technical Staff

Departed mid 2024


Yuri Burda

Member of Technical Staff

Departed late 2024


Mati Roy 

Member of Technical Staff 

Departed mid 2024 


Chris Olah

Member of Technical Staff

Departed late 2020


Mira Murati

Chief Technology Officer

Departed late 2024


Nisan Stiennon

Member of Technical Staff

Departed late 2020

Jeffrey Wu

Member of Technical Staff

Departed mid 2024


Anish Tondwalkar

Member of Technical Staff

Departed 2024


Jacob Hilton

Researcher

Departed early 2023


Suchir Balaji

Member of Technical Staff

Departed late 2024


Daniel Ziegler

Member of Technical Staff

Departed early 2021


September 26, 2024

The Economist

What does the OpenAI exodus say about Sam Altman?

The resignation of one of Mr Altman’s most prominent deputies, along with those of two other senior executives, is the latest in an exodus of OpenAI veterans, including most of those who co-founded the firm in 2015. It is taking place amid efforts by OpenAI to raise $6.5bn at a stonkingly high valuation and revives serious questions about Mr Altman’s leadership qualities.

The OpenAI Files is the most comprehensive collection to date of publicly documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.

© 2025 The Midas Project & The Tech Oversight Project. All rights reserved.

The OpenAI Files was created with complete editorial independence. We have received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor. This report is guided solely by our commitment to corporate accountability and public interest research.