RESEARCH REPORT

The OpenAI Files

Published June 18, 2025

Last Updated June 18, 2025

The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.

Key Findings

Our investigation has identified four major areas of concern.

Restructuring

CEO Integrity

Transparency & Safety

Conflicts of Interest

Restructuring

Analysis of planned changes to the nonprofit's relationship with its for-profit subsidiary

Summary of Findings

OpenAI plans to remove limits on investor returns: OpenAI once capped investor profits at a maximum of 100x to ensure that, if the company succeeds in building AI capable of automating all human labor, the proceeds would go to humanity. They have now announced plans to remove that cap.

OpenAI portrays itself as preserving nonprofit control while potentially disempowering the nonprofit: OpenAI claims to have reversed course on a decision to abandon nonprofit control, but the details suggest that the nonprofit’s board would no longer have all the authority it would need to hold OpenAI accountable to its mission.

Investors pressured OpenAI to make structural changes: OpenAI has admitted that it is making these changes to appease investors who have made their funding conditional on structural reforms, including allowing unlimited returns—exactly the type of investor influence OpenAI’s original structure was designed to prevent.


OpenAI began as a nonprofit with an explicit mission to ensure that artificial intelligence benefits all of humanity. But now, only ten years later, it has begun reshaping itself into a $300 billion for-profit enterprise, trading its legal obligation to humanity for the right to generate unlimited investor returns.

This restructuring may mark the culmination of a decade of seismic change at the San Francisco AI developer. Dozens of reports from former employees reveal a consistent pattern of conflict at the research-lab-turned-tech-behemoth: a company whose lofty promises of safe, responsible AI development have repeatedly crumbled under the weight of market forces, and a famously charismatic CEO who, upon closer inspection, seems to embody the same contradictions that define the organization itself.

This restructuring isn't just a bureaucratic formality. It's the final unmasking of a company that can no longer sustain its founding myth, and a natural experiment in what happens when idealism collides with economic forces. The question isn't whether OpenAI's founders had good intentions. The question is whether good intentions can co-exist with $60 billion in venture funding and the gravitational pull of what may prove to be the most profitable technology in history.

We believe OpenAI still has a narrow window to reclaim its mission. Here's how.

OpenAI’s Transformation

OpenAI was founded in 2015 as a nonprofit organization with the mission of ensuring that AGI benefits humanity. By 2019, it had transitioned to a hybrid structure with a “capped-profit” subsidiary—a commercially focused for-profit company, but one with a legal duty to the OpenAI mission, and with 100x caps on how much money investors could make.

Now, under investor pressure, OpenAI is attempting to remove those profit caps as it restructures into a public benefit corporation without a fiduciary duty to humanity. OpenAI will claim that the nonprofit is retaining control—and in some limited sense it is—but the mission may still be sidelined.

Then

"We’ve designed OpenAI LP to put our overall mission—ensuring the creation and adoption of safe and beneficial AGI—ahead of generating returns for investors [. . .] Regardless of how the world evolves, we are committed—legally and personally—to our mission."

OpenAI

March 11, 2019

OpenAI LP Announcement

Now

“Our plan is to transform our existing for-profit into a Delaware Public Benefit Corporation with ordinary shares of stock … The PBC is a structure used by many others that requires the company to balance shareholder interests, stakeholder interests, and a public benefit interest in its decision-making.”

OpenAI

December 27, 2024

Why OpenAI's Structure Must Evolve to Advance Our Mission

A Culture of Recklessness and Secrecy

Evidence suggests that OpenAI is avoiding transparency and deprioritizing safety practices as it races to commercialize new models.

Former OpenAI employees were pressured into signing highly restrictive non-disclosure and non-disparagement agreements.

2024

I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it. If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars.

—Vox, 5/18/24

June 4, 2024

The New York Times

OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance

A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.

Voices from Inside OpenAI

The Midas Project has collected hundreds of OpenAI documents, employee testimonies, and media reports detailing shifting norms, broken commitments, and integrity incidents. These stories paint a picture of an organization in crisis.

December 8, 2023

The Washington Post

OpenAI leaders warned of abusive behavior before Sam Altman’s ouster

The senior employees described Altman as psychologically abusive, creating chaos at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO

Nitasha Tiku

Mira Murati

Chief Technology Officer

Departed late 2024

“I don’t feel comfortable about Sam leading us to AGI.”

May 15, 2025

The Atlantic

Jan Leike

Head of Alignment

Departed early 2024

“Over the past years, safety culture and processes have taken a backseat to shiny products.”

May 17, 2024

Twitter/X

Ilya Sutskever

Co-founder, co-lead of Superalignment Team

Departed early 2024

“I don’t think Sam is the guy who should have the finger on the button for AGI.”

May 15, 2025

The Atlantic

A Vision for Change

OpenAI is not a typical organization in many ways, but foremost among them is the scope of its mission. It is engaged in what may prove to be a profoundly important project, and as such, it ought to hold itself to an unusually high standard. We present a vision for change across three areas of reform: responsible governance, ethical leadership, and shared benefits. By implementing these changes, OpenAI can begin to repair its culture and reorient toward its critically important mission.

Responsible Governance

Organizational structures must ensure that decisions are made carefully, risks are managed, and commitments are maintained.

Ethical Leadership

The standard for leaders guiding this work should be unusually high, and leadership should be carefully vetted for integrity and conflicts of interest.

Shared Benefits

Binding commitments should be upheld to ensure AI benefits are widely distributed.

The OpenAI Files is the most comprehensive collection to date of publicly documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.

© 2025 The Midas Project & The Tech Oversight Project. All rights reserved.

The OpenAI Files was created with complete editorial independence. We have received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor. This report is guided solely by our commitment to corporate accountability and public interest research.
