Is the World Adopting AI Models Too Fast?
That’s what an elite group of tech leaders and computer scientists argued this week when they signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4. Tesla’s Elon Musk, Apple cofounder Steve Wozniak, and IBM’s Grady Booch all signed the letter. Emad Mostaque, CEO of Stability AI, was a surprising signatory: Stability AI’s image generator, Stable Diffusion, competes with OpenAI’s DALL-E.
The group of industry experts feels that the companies behind recent advances in AI are deploying models without properly weighing the broader consequences. There is a need, they say, for oversight of AI development to ensure new technology serves the public interest and does not head down a dangerous path. The letter lays out, in near-apocalyptic terms, the dangers of AI gone wrong.
“Should we risk the loss of control of our civilization?”
Musk’s Tesla cars rely on AI technology, but he has long voiced reservations about pushing AI too far.
Sam Altman, CEO of OpenAI, recently admitted that he is “a little bit scared” of AI’s potential to dramatically reshape society as we know it. “We’ve got to be careful here,” he said.
Concerns about the effects of AI have been the subject of science-fiction books and movies for decades. But the sheer speed of the rollout, and how quickly ChatGPT has taken hold around the world, has shocked experts and prompted calls to pause development and consider regulating the industry.
It remains unclear whether tech companies like Google, Microsoft, and Snapchat, which have rushed to incorporate the latest AI technology, will heed the call for a pause.