Some of the biggest names in tech are calling for artificial intelligence labs to stop the training of the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”
Elon Musk and Steve Wozniak are among the dozens of tech leaders, professors and researchers who signed the letter, which was published by the Future of Life Institute, a nonprofit backed by Musk.
The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology that underpins the viral AI chatbot tool, ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized exams and building a working website from a hand-drawn sketch.
The letter said the pause should apply to AI systems “more powerful than GPT-4.” It also said independent experts should use the proposed pause to jointly develop and implement a set of shared safety protocols for AI tools that are safe “beyond a reasonable doubt.”
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter said. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
If such a pause cannot be enacted quickly, the letter said, governments should step in and institute a moratorium.
The wave of attention around ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft and Google are at the forefront of this trend, but IBM, Amazon, Baidu and Tencent are working on similar technologies, and a long list of startups is developing AI writing assistants and image generators.
Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, their ability to spread misinformation and their impact on consumer privacy. These tools have also sparked questions about how AI can upend professions, enable students to cheat, and shift our relationship with technology.
Lian Jye Su, an analyst at ABI Research, said the letter reflects legitimate concerns among tech leaders over the unregulated use of AI technologies. But he called parts of the petition “ridiculous,” including its call for a hiatus in AI development beyond GPT-4, saying it could help some of the signatories preserve their dominance in the field.
Musk was a founding member of OpenAI in 2015 but left three years later and has since criticized the company. Microsoft, meanwhile, has invested billions of dollars in OpenAI.
“Corporate ambitions and desire for dominance often triumph over ethical concerns,” Su said. “I won’t be surprised if these organizations are already testing something more advanced than ChatGPT or [Google’s] Bard as we speak.”
Still, the letter hints at broader discomfort, both inside and outside the industry, with the rapid pace of AI advancement. Governments in China, the EU and Singapore have previously introduced early versions of AI governance frameworks.