Vision for Change

A proposed path forward for OpenAI

OpenAI believes that humanity may be only a handful of years away from developing technologies that could automate most human labor. That belief may not prove true, but if it does, such a development would be one of humanity’s most consequential endeavors.

The governance structures and leadership integrity guiding a project as important as this must reflect the magnitude and severity of the mission. The companies leading the race to AGI must be held to, and must hold themselves to, exceptionally high standards.

OpenAI could one day meet those standards, but serious changes would need to be made. We present five pillars of reform, grounded in three core principles, that would strengthen governance, implement robust oversight, elevate leadership standards, foster honest feedback, and ensure that the benefits of AI development flow to humanity.

Core Principles

Responsible Governance

Organizational structures must ensure that decisions are made carefully, risks are managed, and commitments are maintained.

Ethical Leadership

The standard for leaders guiding this work should be unusually high, and leadership should be carefully vetted for integrity and conflicts of interest.

Shared Benefits

Binding commitments should be upheld to ensure AI benefits are widely distributed.

Strengthen OpenAI’s Governance

Establish robust governance structures that ensure the nonprofit maintains meaningful control and oversight in alignment with its mission to benefit humanity.

Choose a structure for the for-profit subsidiary that allows the mission to take priority over the interests of investors, rather than requiring these interests to be balanced. Ensure that OpenAI’s mission-related provisions, including the profit caps, the stop-and-assist commitment, and the entitlement of the OpenAI nonprofit to AGI technology, are preserved in any restructuring.

Implement binding provisions that preserve these structures and cannot easily be altered, preventing their erosion through future restructuring or investor pressure and addressing the pattern of commitments weakening over time.

Allow the public greater visibility into OpenAI’s corporate structure, founding agreements, investment terms, and the results of key internal reports, including the investigation into Altman’s firing and the upcoming nonprofit commission report.

Rebuild OpenAI’s safety culture by honoring past commitments, including the promise to dedicate 20% of computing resources to alignment research, and by implementing stronger preparedness, alignment, and AGI readiness teams and policies.

Implement Robust Oversight Mechanisms

Establish oversight systems that provide meaningful checks and balances on AI development and deployment decisions.

Establish a third-party review of OpenAI’s safety and security practices, governance structures, leadership, and adherence to stated commitments, with findings made available to the public.

Implement clear protocols for documenting and responding to safety incidents, governance failures, or departures from previous commitments. 

Establish clear decision-making frameworks for how the organization will handle increasingly powerful AI systems, including when and how to involve external oversight or postpone further development and deployment.

Ensure that the OpenAI nonprofit is explicitly granted the ability to oversee and veto development and deployment decisions on the basis of safety, security, or the public interest at large.

Elevate Leadership Standards

Recruit and retain leaders whose integrity is commensurate with the profound responsibility of developing advanced AI systems that could transform humanity.

Implement clear, publicly disclosed standards for all leadership positions that address conflicts of interest and ensure a commitment to responsible AI development for the public good.

Require leadership to identify and resolve financial interests that could compromise decision-making, with ongoing monitoring and public disclosure requirements to prevent the emergence of problematic incentives.

Ensure the OpenAI board includes ample representation of experts in AI safety, information security, public policy, and civil society.

Launch a new investigation into Altman’s leadership, including the potential conflicts of interest and integrity incidents detailed in this report. Ensure the investigation has a comprehensive scope, covering incidents unrelated to his firing in 2023 as well as those that have come to light since.

Foster a Culture of Honest Feedback

Create robust mechanisms for whistleblower protection, critical feedback, and meaningful engagement with diverse stakeholders to ensure responsible AI development.

Implement a comprehensive whistleblower protection program with anonymous reporting channels that go directly to the board (rather than the status quo, in which reports pass through intermediaries and are escalated only occasionally).

Create structured engagement processes between OpenAI, governments, and civil society organizations. Publicly document the nature and extent of these processes.

Develop transparent processes for soliciting, documenting, and responding to critical feedback, with regular public reporting on concerns raised and how the organization is responding.

Engage in regular third-party audits of OpenAI’s internal processes, safety guardrails, risk evaluation procedures, and leadership performance. Publicly release the results of these audits.

Preserve Profit Caps and Public Benefit

Maintain the profit caps and financial structures originally established to ensure that the vast majority of value created flows to humanity rather than private interests.

Restore the profit caps on the for-profit subsidiary, with excess returns directed to the nonprofit’s public mission.

Publicly disclose the nature and value of today’s profit caps and any changes made to them, ensuring stakeholders can assess whether organizational commitments are being upheld.

Reverse the decision to allow the profit caps to rise by 20% annually starting in 2025. Compounding at that rate, a cap would grow more than sixfold within a decade and nearly fortyfold within two, rendering it meaningless.

Create binding provisions that prevent any future restructuring from undermining profit caps or other financial safeguards designed to preserve public benefit.

The OpenAI Files is the most comprehensive collection to date of publicly documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.

© 2025 The Midas Project & The Tech Oversight Project. All rights reserved.

The OpenAI Files was created with complete editorial independence. We have received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor. This report is guided solely by our commitment to corporate accountability and public interest research.
