
Capital or safety: OpenAI faces the consequences of choosing capital

Managers leave as safety is compromised


OpenAI is experiencing yet another high-profile departure. Miles Brundage worked on the safe development of AGI, and the team he led is being disbanded. Why does the AI developer keep seeing a steady stream of departures from its top ranks?

The company’s transition to a for-profit structure marks a fundamental shift from OpenAI’s original mission and goals. Investors expect to see returns, an expectation that is difficult to reconcile with spending money on long-term research projects.

This has led OpenAI to restructure, eliminating teams that cannot show immediate results. Following the dissolution of the “Superalignment” team, the “AGI Readiness” team now faces the same fate.

Disgruntled management

The most significant changes in OpenAI’s team composition are increasingly driven by voluntary departures rather than company decisions. More and more key people at the top of the company are choosing to leave because they recognize that the pursuit of profit comes with an important caveat: the emphasis on speed compromises risk prevention measures.

For example, the Superalignment team, disbanded earlier, focused on mitigating long-term AI risks. The AGI Readiness team, now being dissolved, was responsible for assessing whether OpenAI and the wider world were prepared to manage and control AGI.

Miles Brundage joins the list of departed managers concerned about the potential risks of the new organizational structure. He shared the reason for his departure in an extensive blog post on Substack. Brundage cites diminishing freedom in research and publication. According to him, the discussion around AI legislation needs independent voices that are not tied to capital interests in the industry, and he believes he can better fill that role outside OpenAI.

According to this farewell message, Brundage is taking a different path than many of the previously departed executives. The expertise once consolidated within OpenAI is now dispersing to competitors like Anthropic. Those who do not move to a competitor often try to become a challenger to OpenAI themselves. Ilya Sutskever is focusing on the safe development of superintelligence with his new company Safe Superintelligence, and it recently emerged that Mira Murati wants to launch her own AI company and is already organizing initial funding rounds.

Also read: OpenAI exodus: two people replace the previous CTO

No speed, no investors

Can OpenAI reverse course and return to its non-profit roots? According to reports about recent investments, this seems unlikely. Rumors suggest that OpenAI and its investors have agreed on a new organizational structure: the evolution into a for-profit organization that serves social causes must be completed within two years, otherwise investors may reclaim their money.

Tip: OpenAI secures record funding with condition: no backing for Anthropic

Meanwhile, development continues at OpenAI. The company plans to release a new AI model in December, tentatively known by the code name Orion, The Verge reports. According to this timeline, the model would follow three months after the release of o1, previously known as “Project Strawberry.” Unlike o1, which focuses on reasoning capabilities at the expense of speed, Orion would be a direct successor to GPT-4.

‘No one is ready for AGI’

The launch of o1 did not deliver AGI. OpenAI CEO Sam Altman is nevertheless pushing to complete the development of superintelligence as quickly as possible. The plan now appears to be to combine the GPT line of LLMs with a reasoning model such as o1. When that will happen remains unclear.

Brundage’s departure offers some insight into where AGI development stands. “Neither OpenAI nor any other frontier lab is ready [for AGI], nor is the world ready for it,” he noted. According to him, OpenAI’s management is aware of this. He emphasizes that AGI readiness depends on multiple factors, including the evolution of safety culture and regulatory frameworks.

In short, we should not expect the development of superintelligence to be completed anytime soon. At least, not if it is to be safe AGI.

Also read: OpenAI launches o1, making ChatGPT smarter than ever