CEO Integrity

Our research has identified a pattern of concerning behavior that raises questions about Altman's integrity and whether he is a good fit to oversee OpenAI. This pattern includes discrepancies between public statements and private realities, demonstrations of questionable judgment, and reports of manipulation and abuse.

March 15, 2024

The Economist

Sam Altman is a visionary with a trustworthiness problem

In any organisation a CEO who does not seem fully trustworthy is a problem. This is particularly so at the helm of a firm like OpenAI, which is building potentially Promethean technologies.

Altman denied knowledge of equity-threatening non-disparagement agreements, despite personally authorizing them.

2023 – 2024

When OpenAI's restrictive non-disparagement agreements became public—contracts that threatened to strip departing employees of potentially millions in vested equity for any criticism of the company—Altman publicly apologized and claimed ignorance of the policy. However, Vox obtained incorporation documents from April 2023 bearing Altman's signature that explicitly authorized the equity clawback provisions, directly contradicting his claim of ignorance.

Meanwhile, according to documents provided to Vox by ex-employees, the incorporation documents for the holding company that handles equity in OpenAI contains multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it. Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI.

—Vox, 5/22/24 

Senior OpenAI executives Mira Murati and Ilya Sutskever lost confidence in Altman over a pattern of dishonesty and manipulation.

2023 – 2025

December 8, 2023

The Washington Post

OpenAI leaders warned of abusive behavior before Sam Altman’s ouster

The senior employees described Altman as psychologically abusive, creating chaos at the artificial-intelligence start-up — complaints that were a major factor in the board’s abrupt decision to fire the CEO

Nitasha Tiku

Two of Altman's closest collaborators—Chief Technology Officer Mira Murati and Chief Scientist Ilya Sutskever—both concluded that Altman was acting in dishonest and manipulative ways. Their concerns were so serious that Sutskever provided the board with a self-destructing PDF containing Slack screenshots that documented dozens of examples of lying or other toxic behavior. According to reporting from The Atlantic, Murati herself said, “I don’t feel comfortable about Sam leading us to AGI,” while Sutskever said, “I don’t think Sam is the guy who should have the finger on the button for AGI.” The Atlantic also reports that "at least five other people within one to two levels of Altman" gave the board similar feedback.

In [Murati’s] experience, Altman had a simple playbook: first, say whatever he needed to say to get you to do what he wanted, and second, if that didn’t work, undermine you or destroy your credibility … It had taken Sutskever years to be able to put his finger on Altman’s pattern of behavior—how OpenAI’s CEO would tell him one thing, then say another and act as if the difference was an accident. “Oh, I must have misspoken,” Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology.

—The Optimist, 5/20/25

For years, Altman seemingly concealed his ownership of the OpenAI Startup Fund from board members.

2021 – 2023

For years, Altman was the owner of the OpenAI Startup Fund, an entity that benefited from its OpenAI affiliation (including early access to OpenAI products) and that was structured in a way that would typically have entailed financial benefits for Altman.

Then, one night in the summer of 2023, an OpenAI board member overheard a person at a dinner party discussing how inappropriate it was that returns from OpenAI’s Startup Fund were not going to OpenAI investors, given how the fund was able to use scarce resources such as early access to OpenAI products. This was also news to the board. What was this person talking about? OpenAI did launch a startup fund in 2021, which it said at the time would be “managed” by OpenAI, with investment from Microsoft and others. The fund had invested in the AI-driven legal startup Harvey and a handful of other companies. But what was this about returns not going to OpenAI shareholders? The board began to ask Altman about it, and over the course of months of going back and forth, eventually learned that Altman owned the fund personally, and had been raising money from LPs to fund it. In a normal arrangement, that would mean that Altman got carried interest, or “carry”—the fees and profit share that normally accrue to the creator of a fund. OpenAI has said Altman had no financial stake in the fund, and had just set it up personally because it was the fastest way to do it. (Initial answers to the board had been about the tax advantages of this structure.) But that would make a weirdly structured fund even weirder. The independent board members felt like they could not get a straight answer. They also felt they should have been informed ahead of time, given Altman’s repeated claim to not have a stake in OpenAI—and how crucial this status was to his ability to serve on the board at all.

—The Optimist, 5/20/25

Altman appeared to have downplayed the extent of his financial interests in OpenAI while giving testimony under oath to the U.S. Senate.

2021 – 2023

Along with his undisclosed ownership of the OpenAI Startup Fund, Altman has also held OpenAI equity indirectly via two separate investment funds: one through Sequoia and one through Y Combinator. These indirect stakes were likewise not immediately apparent to the public; following Altman’s sworn testimony to the U.S. Senate, many outlets reported that he had no financial interest in OpenAI.

OpenAI CEO Sam Altman sat before Congress in 2023 to testify about the dangers of AI. He told American lawmakers at the time that he owns no equity in OpenAI, something he’s said many times, claiming he just runs the company because he loves it. However, Altman recently said he actually did have some equity in OpenAI through a Sequoia fund at one point, a stake he has since sold.

—TechCrunch, 12/19/24

Altman was allegedly forced out of his role at Y Combinator, accused of prioritizing personal enrichment and frequent absenteeism.

2019

The Washington Post reported that Y Combinator founder Paul Graham flew from the UK to San Francisco to personally dismiss Altman after growing concerned that Altman was neglecting his responsibilities to focus on personal priorities, particularly OpenAI (though Graham would later dispute this framing).

Altman had developed a reputation for favoring personal priorities over official duties … a separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.

—The Washington Post, 11/22/23

Livingston had been surprised and hurt when she learned the full extent of Altman’s moonlighting for OpenAI while ostensibly running YC. For years afterward, they did not speak. Graham was also angry, but quicker to forgive. Still, he regretted not making clear from the outset that he expected Altman to give YC his full attention. Altman’s ambition had turned out to be a double-edged sword.

—The Optimist, 5/20/25

For years after departing, Altman falsely claimed to be the chairman of Y Combinator.

2019 – 2023

Before his departure as president of Y Combinator, Altman unilaterally announced his promotion to chairman on the firm's website without the partnership's approval. The announcement was later removed. Despite the retraction, Altman continued to list himself as chairman in SEC filings for years, even though he never actually held the position.

To smooth his exit, Altman proposed he move from president to chairman. He pre-emptively published a blog post on the firm’s website announcing the change. But the firm’s partnership had never agreed, and the announcement was later scrubbed from the post.

—The Wall Street Journal, 12/24/23

Employees at Altman's first company sought his removal as CEO for "deceptive and chaotic" behavior.

2010s

Senior employees at Altman's first startup, Loopt, made two separate attempts to have the board fire him as CEO, citing his "deceptive and chaotic" behavior. Complaints included Altman pursuing personal side projects that diverted engineers from the company's main work and failing to tell the truth, even about trivial matters. At one point, senior executives threatened to leave if Altman wasn't removed as CEO.

A group of senior employees at Altman’s first startup, Loopt—a location-based social-media network started in the flip-phone era—twice urged board members to fire him as CEO over what they described as deceptive and chaotic behavior, said people familiar with the matter.

—The Wall Street Journal, 12/24/23

If [Altman] imagines something to be true, it sort of becomes true in his head … It may or may not lead one to stretch, and that can make people uncomfortable.

—Mark Jacobstein, former COO at Loopt, as quoted in The Wall Street Journal, 12/24/23

An independent review conducted after Altman’s firing from OpenAI, and never released to the public, reportedly confirmed Altman’s pattern of lying.

2024

Following Altman’s firing, an independent review was commissioned to examine the events leading up to his removal. The resulting report was never publicly released, apparently in part to protect Altman’s image, although board member Larry Summers reportedly acknowledged that it found “many instances” of Altman “saying different things to different people.”

Summers and Taylor hired the law firm WilmerHale to conduct the independent review, during which it said it pored over more than thirty thousand documents and conducted dozens of interviews with the previous board members, executives, and other relevant people and scoped the examination to how the board made its decision to fire Altman. The resulting report was never released to the public or employees. Summers would tell people privately that the investigation had found many instances of Altman saying different things to different people, but to a degree that the new board decided didn’t preclude him from continuing to run the company; it was thus not worthwhile to release any details to sow doubt about Altman’s leadership and risk breaching the confidentiality of people whose testimonials had contributed to the report.

—Empire of AI, 5/20/25

A former OpenAI board member said that Altman made oversight difficult by lying and withholding information.

2024

Helen Toner, a former OpenAI board member, stated that for years Altman systematically obstructed board governance by deliberately withholding information, misrepresenting facts, and outright lying to the board.

But for years, Sam had made it really difficult for the board to actually do [its] job by withholding information. Misrepresenting things that were happening at the company, in some cases outright lying to the board.

—Helen Toner, former OpenAI board member, 5/28/24

Source: TED

Altman sought control over board–employee communications, specifically targeting the board member serving as a designated staff liaison for employees raising concerns.

2023

Tasha McCauley was the board’s designated staff liaison; OpenAI employees who wanted to raise concerns were supposed to speak with her. This made Altman's demand to be informed whenever the board spoke with employees particularly concerning: he was, in effect, seeking to monitor the very channel employees were supposed to use to report concerns at the company, including concerns about his own conduct.

Other board members already had concerns about Altman’s management. Tasha McCauley, an adjunct senior management scientist at Rand Corp, tried to cultivate relationships with employees as a board member. Past board members chatted regularly with OpenAI executives without informing Altman. Yet during the pandemic, Altman told McCauley he needed to be told if the board spoke to employees, a request that some on the board viewed as Altman limiting the board’s power, people familiar with the matter said.

—The Wall Street Journal, 12/24/23

As a condition of returning after his removal, Altman demanded the removal of fellow board members and the appointment of personal allies.

2023

Following his November 2023 dismissal, Sam Altman leveraged his influence during reinstatement negotiations to restructure OpenAI's governance in his favor. Rather than simply returning to his CEO role, Altman made his comeback contingent on removing board members who had fired him and installing his chosen allies as directors.

This is [something] I'm not proud of. I was like: "I will come back if all of you resign right now." I think that was not a constructive thing.

—Sam Altman, 3/12/25

Altman, who was fired Friday, is open to returning but wants to see governance changes, including the removal of existing board members[.]

—Time, 11/19/23

[Altman] may impose conditions, according to multiple news reports, including insisting that Microsoft, OpenAI’s biggest investor, take a seat on the board. He also reportedly wants to add other allies as directors.

—CNN, 11/19/23

Altman allegedly lied about board members' views to create internal conflict.

2023

According to former members of the OpenAI board, Altman fabricated claims about certain board members' views, possibly in an attempt to pit them against each other.

Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true, she said that was “absolutely false.” … Some board members believed that Mr. Altman was trying to pit them against each other.

—The New York Times, 12/9/23

Altman criticized a board member for her independent research and allegedly lied to other directors to push her off the board.

2023

A former board member alleges that, after she published a research paper with a section that seemed to criticize OpenAI’s approach to safety, Altman began trying to push her off the board by lying to other board members.

A few weeks before Mr. Altman’s firing, he met with Ms. Toner to discuss a paper she had co-written for the Georgetown [think tank]. Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its A.I. technologies safe while praising the approach taken by Anthropic, a company that has become OpenAI’s biggest rival[.]

—The New York Times, 11/21/23

The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board.

—Helen Toner, former OpenAI board member, 5/28/24

Altman professed support for AI regulation while lobbying against it.

2015 – 2025

Altman's stated positions on AI regulation have shifted dramatically and have contradicted his private actions. While he publicly advocated for government oversight in his 2023 Senate testimony, reporting just one month later revealed that OpenAI was simultaneously lobbying behind the scenes to weaken the EU AI Act. The company also opposed California's SB 1047, legislation designed to address the very AI risks Altman had warned Congress about. By 2025, Altman had completely reversed course, warning that requiring government approval to release powerful AI systems would be “disastrous.” Furthermore, OpenAI has supported federal preemption of all state laws concerning AI.

Obviously we'd comply with/aggressively support all regulation.

—Sam Altman, 5/25/15

Sam Altman, CEO of ChatGPT-maker OpenAI, warned at a Senate hearing Thursday that requiring government approval to release powerful artificial intelligence software would be “disastrous” for the United States’ lead in the technology. It was a striking reversal after his comments at a Senate hearing two years ago, when he listed creating a new agency to license the technology as his “number one” recommendation for making sure AI was safe.

—The Washington Post, 5/8/25

Altman has left many employees feeling that he lied to them, acted abusively, or otherwise couldn’t be trusted.

2015 – 2025

Dozens of former employees left OpenAI feeling disillusioned with the institution in general or with the integrity of Sam Altman in particular. Former senior employees Dario Amodei and Ilya Sutskever both used the term “abuse” to describe Altman’s behavior, and others have said he exhibits a pattern of misleading people and withholding important information.

Many of the publicly available testimonies from these employees are documented on our website.

The OpenAI Files is the most comprehensive collection to date of publicly documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.

© 2025 The Midas Project & The Tech Oversight Project. All rights reserved.

The OpenAI Files was created with complete editorial independence. We have received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor. This report is guided solely by our commitment to corporate accountability and public interest research.
