Elon Musk, top AI researchers, and industry executives have signed an open letter calling for a six-month pause in the training of AI systems more powerful than OpenAI’s GPT-4, citing their potential dangers to humanity.
The letter, published by the non-profit Future of Life Institute, has been signed by more than 1,000 people, including Elon Musk, researchers from DeepMind, Stability AI CEO Emad Mostaque, and recognized AI experts such as Yoshua Bengio and Stuart Russell. The complete list of signatories can be seen here.
The signatories express concern about the current competitive race in AI development and emphasize the need for regulatory bodies to ensure that AI systems are safe and appropriate for use.
They are calling for a pause in the development of advanced AI until shared safety protocols for such models are in place.
The group highlights the potential risks that competitive AI systems pose to society and civilization – particularly economic and political disruption – and calls on developers to collaborate with policymakers to speed up the creation of robust AI governance rules.
The document states that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.
Because “AI systems with human-competitive intelligence can pose profound risks to society and humanity”, the group urges developers to work with policymakers and regulators so that powerful AI systems are built only when their positive effects are certain and their risks manageable.
This comes as Europol recently warned about the potential misuse of advanced AI like ChatGPT in phishing attempts, disinformation, and cybercrime.
The letter has been welcomed by experts such as Gary Marcus, professor emeritus at New York University, who argues that slowing AI development until we better understand its ramifications is necessary.