Profit Cap Changes
- 2019: OpenAI announces that profit for investors will be capped at 100x ROI initially, with that factor decreasing as time goes on.
- 2019: Microsoft enters a deal with OpenAI that will cap Microsoft’s profits at 20x.
- 2021: Altman says in an interview that the cap for investors is now “single digits.”
- 2023: It is offhandedly mentioned in third-party reporting that OpenAI quietly changed its rules, allowing its profit cap to rise by 20% every year starting in 2025.
- 2025: OpenAI announces its intention to transition to a for-profit corporation without profit caps.
Oversight Changes
- 2015: OpenAI is announced as a nonprofit organization with a duty to benefit humanity instead of pursuing profit motives.
- 2019: OpenAI announces a “capped-profit” subsidiary that is fully overseen by its nonprofit board.
- 2024: OpenAI announces efforts to restructure, ending the legal primacy of its charitable mission above shareholder interests.
- 2025: OpenAI announces plans to preserve a weak version of nonprofit control, while continuing to transition to a new structure without a duty to put its mission above the interests of shareholders.
In 2015, OpenAI was founded as a nonprofit organization with the mission not to build artificial general intelligence (AGI) per se, but to make sure AGI benefits humanity. In 2019, it shifted to a hybrid structure comprising a nonprofit parent entity and a for-profit subsidiary.
The OpenAI nonprofit serves as the parent organization controlling the for-profit subsidiary OpenAI Global, LLC through a complex structure of affiliated companies. The nonprofit is tasked with furthering OpenAI's mission to ensure AGI benefits all of humanity. Moreover, the nonprofit currently has full decision-making authority over the organization's direction and deployment of AGI technology.
This diagram is not certain, and should be viewed as a best guess assembled from recent court filings and third-party reporting. It was created without access to any privileged or non-public information. Sources: OpenAI website, Nonprofit Law blog, Musk v. Altman Corporate Disclosure, NYT v. OpenAI corporate disclosure, Musk v. Altman complaint, Semafor, Windows Central, The Wall Street Journal, Financial Times
So far, the for-profit subsidiary has operated under a “capped-profit” model, which limits the financial returns for investors and employees—something OpenAI’s founders thought would be important if AGI generates as much wealth as they suspected it could. The cap was first set at 100 times the initial investment. This has meant that investors can only earn up to $100 for every $1 they invest, and after that, all profits are redirected back to the nonprofit (and thus the benefit of humanity). Since then, OpenAI has claimed the profit cap dropped—first to 20x, and then to “single digits.”
Now the profit caps stand to be abandoned entirely (allowing investors theoretically unlimited returns), along with the primacy of OpenAI’s mission. While OpenAI claims to have reversed course on earlier plans to abandon nonprofit control, it appears they may still substantially disempower the nonprofit in their upcoming restructuring: transitioning from complete managerial control of the for-profit enterprise to a weaker form of control, namely the ability to appoint people to a newly formed board of the for-profit company.
However, unlike the original 2019 restructuring that explicitly ensured commercial goals were subordinate to the nonprofit's mission, public benefit corporations have no legal requirement to prioritize public benefit over profit. Instead, they have a legal obligation to consider shareholder interests alongside their public benefit mission. Worse yet, in the years since the Delaware Public Benefit Corporation statute was enacted, there have been no reported cases of a shareholder successfully suing to protect the public benefit mission.
By forfeiting both the profit caps and much of its power to oversee the for-profit subsidiary, the OpenAI nonprofit would be disempowered: profit motives would no longer be subordinate to OpenAI’s charitable purpose, humanity would no longer be entitled to all returns above the profit caps, and leadership would no longer have a fiduciary duty to humanity.
Raising by 20% a year?
In 2023, it was reported that the profit caps would increase by 20% every year, starting in 2025. This change was not disclosed by OpenAI itself but instead was revealed through reporting by The Information and The Economist. OpenAI continued to take credit for its capped-profit structure while not acknowledging this substantial modification.
20% is nearly seven times the average annual rate of global economic growth. At a 20% annual increase, the profit cap doubles roughly every four years. If the cap were valued at $100 billion today, it would grow more than 1,000-fold, to over $100 trillion, within forty years, roughly equivalent to the world's current annual economic output. Within a single lifetime, the profit cap designed to ensure AGI benefits humanity at large rather than a handful of investors would become functionally meaningless.
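The compounding arithmetic behind these figures can be checked in a few lines of Python (a minimal sketch; the $100 billion starting value is this article's hypothetical, not a disclosed figure):

```python
import math

# Illustrative compounding of the reported 20%-per-year profit-cap increase.
# The $100 billion starting value is a hypothetical, not a disclosed figure.
START_CAP = 100e9   # dollars
GROWTH = 1.20       # 20% annual increase

# Doubling time under compound growth: log(2) / log(1.2)
doubling_years = math.log(2) / math.log(GROWTH)

# Growth factor and cap value after 40 years of compounding
factor_40y = GROWTH ** 40
cap_40y = START_CAP * factor_40y

print(f"doubling time: {doubling_years:.1f} years")        # ~3.8 years
print(f"40-year factor: {factor_40y:.0f}x")                # ~1470x
print(f"cap in 40 years: ${cap_40y / 1e12:.0f} trillion")  # ~$147 trillion
```

At 20% compounding, the cap doubles in under four years and grows well past current world economic output within four decades, which is the sense in which it becomes functionally meaningless.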
Past vs. Present
OpenAI’s justification for removing the profit caps is that there are now multiple companies competing to develop AGI, so the structure is less fitting than it would be in a world with a single company building AGI. In other words, OpenAI has argued that the profit caps were implemented in anticipation of a single AGI company leading the industry, as opposed to many. However, its own documents from the time contradict this narrative.
At the time of OpenAI’s founding, internal emails show that its primary concern was developing AGI before Google, which it then saw as the company most likely to build AGI. OpenAI’s 2018 Charter, published a year before the profit caps were implemented, included a “stop-and-assist” commitment (another safeguard that may be eliminated in the upcoming restructuring) which explicitly acknowledged that OpenAI anticipated multiple AGI companies racing to develop the technology. The capped-profit structure, implemented only a year after the stop-and-assist commitment, was therefore seemingly not designed in anticipation of a world with one single AGI company, despite OpenAI’s attempt at creating a revisionist history.
Still, we can try taking them at their word: Maybe they now think it’s less likely that one single company will own most future economic value, and more likely that a handful will, and thus it is okay to abandon the profit caps. But a handful of companies owning the returns of most future economic activity still may not benefit humanity at large, and OpenAI’s profit caps could help mitigate this by assigning the public the right to some percentage of the potentially vast wealth derived from AGI.
Perhaps it was implicit in their thinking that, since many companies might divide the total market for AGI, OpenAI in particular would never actually reach its profit cap. But if OpenAI didn’t expect to hit the cap, why would removing it now be so important to investors, and thus to OpenAI?
OpenAI might argue that it is replacing the profit caps with conventional equity granted to the nonprofit as shares in the new public benefit corporation, so the nonprofit remains entitled to the returns from AGI; especially, they might argue, if the ownership stake is deemed fair by the nonprofit board and external advisors. But this may be misleading: the fierce pressure investors are putting on OpenAI to remove the profit caps should reveal how bad a deal this could be for humanity.
If the caps are removed, in a world where OpenAI fails to turn a profit and commercialize its technology, the nonprofit’s stake goes to zero. In a world where OpenAI succeeds in its potential to, as CEO Sam Altman put it, “capture the light cone of all future value in the universe,” the public will be entitled to significantly less of this value, and investors will be entitled to more.
There’s a simple way to understand this situation: The profit caps were implemented because OpenAI took seriously the unprecedented economic value of AGI, and they thought that a 100x return on investment was more than enough for investors, while humanity at large (which, as OpenAI president Greg Brockman has pointed out, bears most of the risks of AGI development) fundamentally deserves the lion’s share of its benefits. Nothing about the past half decade has changed this.
What has changed is that investors like SoftBank have made their continued funding of OpenAI conditional on removing these profit caps, creating precisely the kind of undue private influence the caps were designed to prevent. Originally conceived as a safeguard against letting private investment dictate humanity's future, the profit caps are now being sacrificed for the sake of attracting private investment. This represents a complete inversion of OpenAI's founding principles; the mechanism meant to protect against disproportionate consideration of investors’ interests is being eliminated to satisfy investors’ interests.