OpenAI, Microsoft, Google and Anthropic are setting up their own organization to oversee the safe development of machine learning models.
Four major players in AI are setting their own guidelines for the safe and responsible development of machine learning models. OpenAI, Microsoft, Google and Anthropic are launching a forum focused on what they call ‘frontier AI models’: models powerful enough to pose risks to public safety.
The organization’s name reflects its mission: the Frontier Model Forum. Within it, the four companies will combine their expertise to create tests, benchmarks and best practices. The resulting guidelines will also be shared with policymakers, academics and businesses.
That work still lies ahead, however. The four parties announced the formation today and say they will spend the coming months assembling an advisory board and a board of directors. Funding also has yet to be arranged.
Disguised advocacy for self-regulation
With the Frontier Model Forum, the four companies demonstrate their commitment to bringing safe AI products to market. At the same time, it sends the message that external regulation is unnecessary and that players in the AI field are happy to write their own rules of the game.
The Frontier Model Forum therefore invites other players in the AI world to join. To qualify, organizations must develop ‘frontier AI models’ and be committed to deploying them safely.
Tip: Are OpenAI, Microsoft and Google lobbying their way out of the AI Act?