Cited Excerpts from Empire of AI by Karen Hao

“To people around them, the Amodei siblings would describe Altman’s tactics as ‘gaslighting’ and ‘psychological abuse.’”

“Before the evaluations had meaningfully started, however, Altman had insisted on keeping the schedule: ‘On May 9, we launch Scallion,’ the safety researcher quoted Altman saying. This was not just worrying for Preparedness but for all of OpenAI’s safety procedures, including red teaming and alignment.” 

“Altman had made each of OpenAI’s decisions about the Microsoft deal and GPT-3’s deployment a foregone conclusion, but he had maneuvered and manipulated dissenters into believing they had a real say until it was too late to change course.”

“A month later, on July 22, 2019, Microsoft announced its $1 billion investment. Under the terms of the deal, its returns would be capped at 20x.”

“In private settings and with close friends, he still showed flashes of anger and frustration. In public ones and with acquaintances, he embodied the nice guy. He readily gave people credit for things and texted in all lowercase with lots of smiley and frowny faces. He gave employees his personal number, encouraging them to reach out at any time and responding to their feedback with impressive attentiveness. He avoided expressing negative emotions, avoided confrontation, avoided saying no to people. Once when OpenAI fired an employee, he reached out personally to offer ketamine and booze as consolation. ‘I think all of Sam’s relationships end in a good way whether you want it to or not,’ the employee says.”

“‘I don’t think Sam is the guy who should have the finger on the button for AGI,’ [Sutskever] said, and noted the ‘tremendous opportunity’ that had befallen the board to do something about it. He’d then suggested a path forward: replace Altman with Murati as an interim CEO. Later, as Toner, McCauley, and D’Angelo all conferred with one another, they realized that Murati had also said, ‘I don’t feel comfortable about Sam leading us to AGI.’ The revelation would have a huge influence on their thinking. If two of Altman’s most senior deputies—one from Applied and one from Safety—both felt this way, the board had a serious problem. Then, on October 24, Toner had had a meeting with D’Angelo and McCauley to discuss steps they could continue to take to shore up the board’s oversight mechanisms. One glaring issue: OpenAI’s nonprofit didn’t have sufficient independent legal support, and everything was being routed through the for-profit lawyers. The three agreed that it was time to find new nonprofit lawyers who could be present at every board meeting and help review all of the deals and other legal arrangements that Altman was striking”

“When it came to establishing the new oversight mechanisms, which included different channels for increasing the board’s visibility into the company’s safety and security practices, the independent directors were also left with a similar feeling that they weren’t a priority for Altman. Early in McCauley’s tenure as a director, Altman had designated her the board’s employee liaison and advocate; she subsequently met with employees regularly by holding office hours. Once she had also brought her husband, actor Joseph Gordon-Levitt, to a company off-site, where he’d listened intently to technical presentations. But during the pandemic, those meetings had petered out. Afterward, McCauley continued to keep some regular meetings, but the open office hours never restarted.”

The OpenAI Files is the most comprehensive collection to date of publicly documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.

© 2025 The Midas Project & The Tech Oversight Project. All rights reserved.

The OpenAI Files was created with complete editorial independence. We have received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor. This report is guided solely by our commitment to corporate accountability and public interest research.
