Microsoft’s MAI-1 LLM is meant to compete with Gemini and its own OpenAI

Microsoft is currently developing a new LLM called MAI-1 to compete with major LLMs from other providers. These include models from Google, Meta and Anthropic, as well as from OpenAI, in which Microsoft holds a large stake.

The large MAI-1 LLM that the tech giant is developing internally may build on knowledge brought by employees who transferred to Microsoft earlier this year from AI startup Inflection AI. Former Inflection CEO and Google DeepMind co-founder Mustafa Suleyman is leading the development effort, sources tell The Information.

500 billion parameters

Microsoft’s MAI-1 LLM, which is currently under development, is substantially larger than the recently released Phi family of small but powerful AI models. The new LLM is estimated to have about 500 billion parameters, while Phi-3 Mini has 3.8 billion. OpenAI’s GPT-4, reportedly around 1 trillion parameters, is of course much larger still.

The new LLM is reportedly not simply a rebranded model from Inflection’s inventory, but sources say it may use the same training datasets and parts of the same technology.

More competition among LLMs

MAI-1 clearly has to compete with LLMs from other providers such as Gemini (Google), Llama 3 (Meta), Claude 3 (Anthropic) and, most importantly, GPT-4 from OpenAI itself, in which Microsoft has a significant stake.

Because the new LLM is much larger, training it naturally requires more computing power. The cost of MAI-1 will therefore be higher than that of the Phi models Microsoft previously marketed, which are specifically targeted at smaller companies building AI applications and solutions.

Also read: Microsoft introduces tiny AI model Phi-3 Mini