Transparency & Safety
Our research has documented various safety incidents, broken promises, and red flags within OpenAI's organizational culture. These incidents span multiple years and, taken together, paint a picture of an organization torn between its mission and commercial incentives.
Our goal here is to document these transparency and safety incidents and OpenAI's responses to them, and to offer analysis that can inform organizational improvements and policy responses.

July 12, 2024
The Washington Post
OpenAI promised to make its AI safe. Employees say it ‘failed’ its first test.
But this spring, some members of OpenAI’s safety team felt pressured to speed through a new testing protocol, designed to prevent the technology from causing catastrophic harm...
Pranshu Verma, Nitasha Tiku, and Cat Zakrzewski

Former OpenAI employees were pressured into signing highly restrictive non-disclosure and non-disparagement agreements.
2024

May 22, 2024
Vox
Leaked OpenAI documents reveal aggressive tactics toward former employees
Has Sam Altman told the truth about OpenAI’s NDA scandal?
Kelsey Piper
Former OpenAI employees faced highly restrictive non-disclosure and non-disparagement agreements that threatened the loss of their vested equity, potentially worth millions, if they refused to sign or if they later criticized the company, effectively ensuring their silence.
I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it. If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars.
—Vox, 5/18/24
Source: Vox
OpenAI never fulfilled its promise to allocate resources to its own safety team.
2023 – 2025
In 2023, OpenAI promised to commit twenty percent of its computing resources to its newly formed Superalignment safety research team.
However, according to former team lead Jan Leike and reporting from Fortune, OpenAI never allocated the promised compute, and the team's repeated requests for it were denied.
But a half-dozen sources familiar with the Superalignment team’s work said that the group was never allocated this compute. Instead, it received far less in the company’s regular compute allocation budget, which is reassessed quarterly.
—Fortune, 5/21/24
A former OpenAI employee testified to Congress that OpenAI’s security practices were weak.
2024
In his testimony to the United States Senate Judiciary Subcommittee on Privacy, Technology, & the Law, former member of OpenAI’s technical staff William Saunders stated that despite OpenAI’s claims to care about security, the company was highly vulnerable to internal threats.
AGI will also be a valuable target for theft, including by foreign adversaries of the United States. While OpenAI publicly claims to take security seriously, their internal security was not prioritized. When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me or hundreds of other engineers at the company to bypass access controls and steal the company’s most advanced AI systems, including GPT-4.
—William Saunders' Testimony to the United States Senate Judiciary Subcommittee on Privacy, Technology, & the Law, 9/17/24
OpenAI rushed safety evaluations of its AI models to meet product deadlines.
2023 – 2025
In December 2023, OpenAI published a Preparedness Framework outlining risk assessment processes for new models, including a scorecard system to evaluate potential harms. Under the framework, only models with appropriately low post-mitigation risk scores could be deployed.
OpenAI employees felt pressured to rush through safety evaluations for GPT-4 Omni (internally codenamed Scallion) in spring 2024, with the company planning release celebrations before the preparedness team could determine whether the model was safe.
Before the evaluations had meaningfully started, however, Altman had insisted on keeping the schedule: ‘On May 9, we launch Scallion,’ the safety researcher quoted Altman saying. This was not just worrying for Preparedness but for all of OpenAI’s safety procedures, including red teaming and alignment.
—Empire of AI, 5/20/25
Sources: OpenAI, Empire of AI by Karen Hao, The Washington Post
Former employees alleged that OpenAI illegally barred its workers from warning regulators about safety risks.
2024

July 13, 2024
The Washington Post
OpenAI illegally barred staff from airing safety risks, whistleblowers say
In a letter exclusively obtained by The Washington Post, whistleblowers asked the SEC to probe company’s allegedly restrictive nondisclosure agreements.
Pranshu Verma, Cat Zakrzewski, and Nitasha Tiku
A group of OpenAI whistleblowers told the SEC that the company's employment, severance, and nondisclosure practices illegally prohibited employees from warning regulators about the risks its technology may pose. According to the complaint, OpenAI required staff to sign agreements waiving their federal right to whistleblower compensation and prohibiting them from disclosing information to federal authorities without prior permission.
OpenAI whistleblowers have filed a complaint with the Securities and Exchange Commission alleging the artificial intelligence company illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, calling for an investigation. The whistleblowers said OpenAI issued its employees overly restrictive employment, severance and nondisclosure agreements that could have led to penalties against workers who raised concerns about OpenAI to federal regulators, according to a seven-page letter sent to the SEC commissioner earlier this month that referred to the formal complaint.
—The Washington Post, 7/13/24
Source: The Washington Post
OpenAI suffered a cybersecurity incident and failed to report it for over a year.
2023 – 2024
In 2023, OpenAI promised the White House that it would share information relevant to AI security with governments and publicly report security risks. It reaffirmed this commitment at the AI Safety Summit in Seoul, South Korea in 2024.
But in 2023, a hacker gained access to OpenAI's internal messages and stole details about its AI technology. The company did not inform authorities about the breach, which did not become public for over a year.
For some OpenAI employees, the news raised fears that foreign adversaries such as China could steal A.I. technology that — while now mostly a work and research tool — could eventually endanger U.S. national security. It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company about the risks of artificial intelligence.
—The New York Times, 7/4/24
OpenAI allegedly fired an employee after they shared security concerns with the board.
2024
OpenAI technical program manager Leopold Aschenbrenner shared a memo with OpenAI’s board of directors raising security concerns, specifically fears that foreign adversaries could easily steal the company’s secrets and threaten the United States’ national security.
When OpenAI fired Aschenbrenner, the company allegedly made clear to him that sharing his concerns with the board was the reason he was being let go rather than merely reprimanded.
A few weeks later, a major security incident occurred. That prompted me to share the memo with a couple of board members. Days later, it was made very clear to me that leadership was very unhappy I had shared this memo with the board. Apparently, the board hassled leadership about security … The reason I bring this up is that when I was fired, it was made very explicit that the security memo was a major reason for my being fired. They said, "the reason this is a firing and not a warning is because of the security memo."
—Leopold Aschenbrenner on Dwarkesh Podcast, 6/4/24
Sources: Transformer, Dwarkesh Podcast, The New York Times
OpenAI’s GPT-4 was publicly tested in India without the required approval from its Deployment Safety Board.
2022
A joint Deployment Safety Board (DSB) between OpenAI and Microsoft was set up to review new models for risks before they were released. However, GPT-4 was tested publicly in India prior to DSB approval.
In some instances, Microsoft fell afoul of the DSB, but the OpenAI board was alarmed when they were informed about such a setback from an employee—who stopped a board member in the hallway and asked if the board knew about the safety breach—rather than from Altman, despite having just completed a six-hour board meeting. In late 2022, Microsoft had rolled out a version of still-unreleased GPT-4 in a test in India without getting DSB approval first. While it ultimately got it, the breach in India suggested to some board members that the companies’ safety processes were not working.
—The Optimist, 5/20/25
OpenAI significantly cut the time and resources it dedicated to safety testing.
2023 – 2025

April 11, 2025
Financial Times
OpenAI slashes AI model safety testing time
Testers have raised concerns that its technology is being rushed out without sufficient safeguards
Cristina Criddle
While Sam Altman and OpenAI claimed that the company would become increasingly cautious as its AI systems approached AGI, OpenAI has instead slashed the time and resources it dedicates to safety testing.
As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.
—Sam Altman, 2/24/23
OpenAI has slashed the time and resources it spends on testing the safety of its powerful artificial intelligence models, raising concerns that its technology is being rushed out without sufficient safeguards. Staff and third-party groups have recently been given just days to conduct “evaluations”, the term given to tests for assessing models’ risks and performance, on OpenAI’s latest large language models, compared to several months previously. According to eight people familiar with OpenAI’s testing processes, the start-up’s tests have become less thorough, with insufficient time and resources dedicated to identifying and mitigating risks, as the $300bn start-up comes under pressure to release new models quickly and retain its competitive edge.
—Financial Times, 4/10/25
Sources: OpenAI, Financial Times
OpenAI did not publish a promised safety evaluation until three months after the model had already been released to the public.
2024
OpenAI committed itself to releasing preparedness findings alongside each of its frontier model releases.
OpenAI released its preparedness scorecard risk assessment for GPT-4o, the first model it released after adopting its preparedness framework, three months after the model’s public release.
We’ll continue to publish our Preparedness findings with each frontier model release, just as we’ve done for GPT‑4o...
—OpenAI, 4/15/25
OpenAI did not immediately release a safety scorecard for GPT-4o, GPT-4.1, o1 Pro, or Deep Research, despite promises to do so.
2024 – 2025
OpenAI committed to publishing preparedness findings for each of its frontier models at the time of release, in order to ensure the models were safe to deploy.
OpenAI released its preparedness scorecard risk assessment for GPT-4o, the first model it released after adopting its preparedness framework, three months after the model’s public release.
OpenAI boasted that GPT‑4.1 outperformed GPT-4o "across the board." Yet while GPT-4o did eventually receive a safety scorecard, the company has still not released one for GPT-4.1, months later.
When o1 Pro was released, OpenAI did not release a safety scorecard evaluating its performance.
When OpenAI's Deep Research was released, powered by the full o3 model, OpenAI did not release a safety scorecard for the o3 model or the Deep Research product.
OpenAI also failed to release a system card for Deep Research when it was first made available ... But this is the most significant model release I can think of that was released without any safety information. This should be a wake-up call for policymakers. Companies are reneging on promises they’ve made to governments, and may not be carrying out basic safety testing procedures on whether their models are increasing national-security relevant risks.
—Transformer, 3/28/25
As a part of our Preparedness Framework, we will maintain a dynamic (i.e., frequently updated) Scorecard that is designed to track our current pre-mitigation model risk across each of the risk categories, as well as the post-mitigation risk. The Scorecard will be regularly updated by the Preparedness team to help ensure it reflects the latest research and findings.
—OpenAI, 12/18/23
We’ll continue to publish our Preparedness findings with each frontier model release, just as we’ve done for GPT‑4o, OpenAI o1, Operator, o3‑mini, deep research, and GPT‑4.5...
—OpenAI, 4/15/25
OpenAI’s safety evaluations for its o1 model were based on an older version, not the more-capable model publicly available on release.
2025
OpenAI’s preparedness framework says the company will measure the capabilities of models that pose risks of severe harm and will refrain from deploying them until its evaluations show that those risks have been mitigated. But OpenAI did not test the full version of o1 that it eventually released, instead evaluating an earlier, less-capable checkpoint of the model and initially not disclosing this.
But even with the nonprofit maintaining control and majority ownership, OpenAI is speedily working to commercialize products as competition heats up in generative AI. And it may have rushed the rollout of its o1 reasoning model last year, according to some portions of its model card. Results of the model’s “preparedness evaluations,” the tests OpenAI runs to assess an AI model’s dangerous capabilities and other risks, were based on earlier versions of o1. They had not been run on the final version of the model, according to its model card, which is publicly available.
—CNBC, 5/14/25
Source: CNBC
Current and former OpenAI employees have accused the company of a culture of recklessness and secrecy.
2024

June 4, 2024
The New York Times
OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
A group of current and former employees is calling for sweeping changes to the artificial intelligence industry, including greater transparency and protections for whistle-blowers.
Kevin Roose
A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created. The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous. The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
—The New York Times, 6/4/24
Source: The New York Times
Legal
The OpenAI Files is the most comprehensive collection to date of publicly documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.
© 2025 The Midas Project & The Tech Oversight Project. All rights reserved.
The OpenAI Files was created with complete editorial independence. We have received no funding, editorial direction, assistance, or support of any kind from Elon Musk, xAI, Anthropic, Meta, Google, Microsoft, or any other OpenAI competitor. This report is guided solely by our commitment to corporate accountability and public interest research.