ChatGPT gets smarter, cheaper and less lazy

OpenAI has refined its offerings and made many of them cheaper. A new LLM for ChatGPT reduces incidents where the chatbot shows “laziness” in its responses. Meanwhile, new embedding models let developers prepare text for AI more efficiently, and the OpenAI API now offers greater flexibility.

Several of ChatGPT’s paid services are getting a bit cheaper. For example, deploying GPT-3.5 Turbo is now 50 percent more affordable for inputs and 25 percent for outputs. OpenAI points out that 70 percent of enterprise users have already switched from GPT-4 to GPT-4 Turbo, which is also cheaper due to efficiency gains.

Better responses again?

A new GPT-4 Turbo preview model comes with a noteworthy improvement. “gpt-4-0125-preview” is said to complete tasks such as code generation more thoroughly, reducing the reported “laziness.” Users have long complained that ChatGPT seems to be becoming less and less capable. For example, on the OpenAI Developer Forum in November, one user called it “dumber” than before, producing more incorrect code and showing worse conversational memory.
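For developers, opting into the updated model is a matter of passing its identifier in an API request. A minimal sketch, assuming the standard Chat Completions request shape (the helper function and prompt text are illustrative, not part of OpenAI's announcement):

```python
# Sketch: selecting the updated GPT-4 Turbo preview model by name.
# The identifier "gpt-4-0125-preview" comes from OpenAI's announcement;
# the payload shape follows the Chat Completions API.
def build_chat_request(prompt: str) -> dict:
    return {
        "model": "gpt-4-0125-preview",  # the new, less "lazy" preview model
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_chat_request("Write a complete merge sort, no placeholder comments.")
```

Sending this payload to the Chat Completions endpoint would route the prompt to the new preview model instead of an older GPT-4 Turbo snapshot.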

This was already supported by scientific findings earlier last year. Research from Stanford University and UC Berkeley showed that complex math questions were answered less accurately than before. However, the chatbot did seem better at resisting abusive and undesirable prompts. ChatGPT appears to have gradually become less harmful, but also less effective. The new model should fix that.

Preparing text for AI gets easier

OpenAI has also introduced two new embedding models. As OpenAI puts it, an embedding is “a sequence of numbers that represents the concepts within content such as natural language or code.” The new models are a lot more efficient, and thus cheaper.

Embeddings have many advantages. For example, they allow organizations to prepare proprietary text or code for AI deployment in an encoded, numerical form. AI models can then use the relationships between these vectors to provide accurate answers. OpenAI also uses this methodology to let ChatGPT and the Assistants API retrieve knowledge.
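The idea that embeddings capture relationships between concepts can be illustrated with cosine similarity, the standard way to compare two embedding vectors. The vectors below are made-up four-dimensional toy values; real OpenAI embeddings (e.g. from the new models) have hundreds or thousands of dimensions:

```python
from math import sqrt

# Toy illustration: each text maps to a vector, and nearby vectors mean
# related concepts. These 4-dimensional vectors are invented for the example.
embeddings = {
    "invoice": [0.9, 0.1, 0.0, 0.2],
    "receipt": [0.85, 0.15, 0.05, 0.25],
    "giraffe": [0.0, 0.9, 0.8, 0.1],
}

def cosine_similarity(a, b):
    # Ratio of the dot product to the product of vector lengths: 1.0 means
    # identical direction, values near 0 mean unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related terms score higher than unrelated ones.
related = cosine_similarity(embeddings["invoice"], embeddings["receipt"])
unrelated = cosine_similarity(embeddings["invoice"], embeddings["giraffe"])
```

This is exactly the comparison a retrieval step performs: embed the query, embed the documents, and return the documents whose vectors score highest against the query.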

API usage improved

OpenAI is also making API keys easier to customize. Developers can now assign permissions to these keys via a dedicated page on the OpenAI platform. The company promises that API data will never be used to train or improve its own AI models.

Incidentally, data protection has become a thornier issue for OpenAI since the introduction of the GPT Store. That platform offers custom GPTs from third parties, which means users’ data can end up with those parties as well.

Also read: Microsoft deviates from OpenAI with new team for small AI models