The urgent call for regulating human-competitive AI: does AI pose a risk to humanity?

The release of ChatGPT, which can draft messages for users in a matter of seconds, and the increasing adoption of generative AI raise concerns about the potential risks these advanced systems pose to humanity, highlighting the need for AI regulation.

In response to these concerns, a public statement endorsed by prominent figures such as Elon Musk, top AI experts, and industry leaders has called for a six-month pause on training systems that surpass the capabilities of OpenAI’s GPT-4.

AI regulations may be necessary for several reasons:

  1. AI can automate and replace human jobs, which can lead to economic and social disruption. 
  2. AI systems can be biased or discriminatory, which can result in unfair treatment of individuals or groups. Regulations can help ensure that AI systems are developed and used in a fair and ethical manner.
  3. AI can be used for malicious purposes, such as generating convincing fake news, cyberattacks, phishing, or developing autonomous weapons. Regulations can help limit the use of AI for harmful purposes and ensure that it is used responsibly.

Sam Altman, the CEO of OpenAI, believes that artificial general intelligence (AGI) would bring unprecedented prosperity and wealth to the world, but he also shares concerns about the potential harm it could cause. “Am I doing something good? Or really bad?” Altman asked in an interview with The New York Times.

How do threat actors abuse AI systems?

One answer to this question can be found by simply asking ChatGPT:

ChatGPT used for malicious purposes
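
For illustration, the same question can also be posed programmatically. The snippet below is a minimal, hypothetical sketch using the OpenAI Python SDK (v1.x); it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the model name and prompt are illustrative placeholders rather than part of CPR’s research.

```python
# Minimal sketch: asking ChatGPT how AI systems can be abused,
# via the OpenAI Python SDK (v1.x).
# Assumes OPENAI_API_KEY is set in the environment;
# the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # hypothetical choice; any chat-capable model works
    messages=[
        {
            "role": "user",
            "content": "In what ways could threat actors abuse AI systems like ChatGPT?",
        }
    ],
)

print(response.choices[0].message.content)
```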

Check Point Research (CPR) has disclosed how cybercriminals are utilizing ChatGPT to produce harmful tools.

While some hackers rely entirely on AI to develop their tools, others use it only to accelerate the generation of malicious code, such as malware, phishing lures, or ransomware.

CPR asked ChatGPT to provide assistance in creating a believable phishing email that would impersonate a hosting company (see picture below).

ChatGPT creating a phishing email

During further discussions, CPR indicated that the objective was to persuade the target to download an Excel document.

Worldwide AI regulations 

There are many efforts underway to develop AI regulations at both national and international levels. Below are a few examples: 

  • The US Chamber of Commerce is urging regulation of AI technologies, arguing that failing to regulate AI may harm the economy and individual rights. It also highlights the security risks posed by China’s rapid entry into the sector.
  • The Government of Canada has released preliminary guidance on the forthcoming Artificial Intelligence and Data Act (“AIDA”).
  • The European Commission has approved two multiannual work programs for the Digital Europe Program (24 March 2023), outlining the objectives and specific focus areas that will receive almost €1.3 billion in funding for digital transformation and cybersecurity.

Conclusion

The development of AI that can surpass human capabilities in certain areas raises additional concerns, such as the potential for job displacement. Moreover, the creation of AI systems that possess human-like consciousness or sentience also raises ethical considerations.

Therefore, it is important that regulations are developed in a way that addresses these concerns and ensures that the development of AI is aligned with human values and goals.
