Sam Altman, president of Y Combinator. (Photo: Patrick T. Fallon | Bloomberg | Getty Images)
OpenAI CEO Sam Altman said he agreed with parts of an open letter from the Future of Life Institute that called for a six-month halt to AI research. The letter, signed by tech leaders including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, was nonetheless “missing most technical nuance about where we need the pause,” Altman said.
Altman made the remarks Thursday during a video appearance at an MIT event on business and AI.
OpenAI makes ChatGPT, an AI bot that generates human-like responses to user questions. The bot kicked off an AI frenzy in the technology world: Microsoft uses OpenAI’s technology in its Bing chatbot, and Google recently launched its competitor, Bard.
“I think moving with caution and an increasing rigor for safety issues is really important,” Altman continued. “The letter I don’t think was the optimal way to address it.”
In March, Musk, Wozniak and dozens of academics and tech leaders called for an immediate pause on training “experiments” with large language models “more powerful than GPT-4,” OpenAI’s flagship large language model, or LLM. More than 25,000 people have signed the letter since then.
OpenAI’s GPT technology garnered international attention when ChatGPT launched in 2022. GPT technology underpins Microsoft’s Bing AI chatbot, and prompted a flurry of AI investment.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.
“I also agree as capabilities get more and more serious, that the safety bar has got to increase,” Altman said at the MIT event.
Earlier this year, Altman acknowledged that AI technology made him a “little bit scared.” Questions about safe and ethical AI use have come up at the White House, on Capitol Hill, and in boardrooms across America.
“We are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter,” the OpenAI executive said.