Leading artificial intelligence companies on Wednesday unveiled plans to launch an industry-led body to develop safety standards for the rapidly advancing technology, outpacing Washington policymakers who are still debating whether the U.S. government needs its own AI regulator.
Google, ChatGPT-maker OpenAI, Microsoft and Anthropic introduced the Frontier Model Forum, which the companies say will advance AI safety research and technical evaluations for the next generation of AI systems, which they predict will be even more powerful than the large language models that currently power chatbots such as Bing and Bard. The Forum will also serve as a hub where companies and governments can share information with one another about AI risks.