Despite plenty of competition, ChatGPT remains as popular as ever. In fact, its GPT-4o image generation tool has driven a new surge in users. However odd it may sound, this wide adoption is a cause for concern at OpenAI, and CEO Sam Altman has warned of adverse consequences ahead.
Future OpenAI product launches may be postponed, Altman said on X. There is simply not enough capacity to meet the enormous demand for AI output, with AI-generated images the latest craze among online users. As of this week, users no longer need to pay OpenAI a dime to flood ChatGPT's servers with image requests.
A 40 billion dollar investment round led by SoftBank should at least alleviate short-term troubles. It gives OpenAI room to maneuver and helps fund the Stargate Project, which aims to build out AI infrastructure across the US. Nevertheless, AI models are becoming a commodity, meaning OpenAI will need to keep making gargantuan investments.
Under control, but…
According to Altman, the current situation is under control, but it is already eating into the capacity available for other AI initiatives. OpenAI has to divide its compute among AI inference (where users actually use ChatGPT and the OpenAI API), experiments, and AI training runs. The latter are the cornerstone of building new LLMs such as o3 and GPT-4.5. Since early 2023, a model to be called GPT-5 has been anticipated, but its release still seems a long way off. It appears the weight of expectation for such a model is too much to bear for now.
Unlike Google, OpenAI does not (yet) have AI chips of its own, although the company plans to develop them. That effort remains a long way from fruition, which means the company depends on external hardware, usually Nvidia GPUs. Given its deteriorating relationship with Microsoft, it is quite possible that the advantageous Azure capacity it has enjoyed has been, or is being, reduced.
Why is this a problem?
ChatGPT currently has 500 million weekly users and 20 million paying subscribers, a significant increase from the end of 2024, when the platform had 300 million users and 15.5 million subscribers. This explosive growth puts enormous pressure on the company's infrastructure, and revenue is not enough to offset the costs. For the most expensive models, with o1-pro at the top, even subscriptions of 200 dollars a month fail to cover OpenAI's costs due to the sheer volume of queries.
The problem is that OpenAI risks falling between two stools if it does not innovate quickly. New models do not yet represent the same leap forward that GPT-4 made in 2023 over GPT-3.5, the model ChatGPT launched with in November 2022. This slowdown is less worrying for rivals such as Anthropic, Google, DeepSeek or Mistral, whose capacity pressures are either manageable or only sporadic. For OpenAI, it is vital that the moat around state-of-the-art GenAI remains intact. If not, the soaring costs of AI inference, AI hardware and AI R&D will be insurmountable.
Attempt at differentiation
OpenAI is trying to distinguish itself from the competition by launching new products and investing in hardware development, efforts that branch off in all kinds of directions. For example, the company recently hired Caitlin Kalinowski, former head of Meta's AR glasses team, to develop hardware solutions.
In addition, Altman is involved in initiatives to reduce dependence on Nvidia chips. He is said, for example, to have long harbored plans to develop custom AI chips with backing from the Middle East.