What are the AI safety regulations amid the launch of GPT-4o?
15 May 2024
Article written by Dr Mahdi Aiash
MDX academic Dr Mahdi Aiash provides insight into how the UK, the EU, the United States and China are handling regulation of the technology as the new 'more human-like' chatbot is unveiled.
Just ahead of the highly anticipated Google I/O 2024 event, the customary platform for major updates to Google's flagship products such as Search, Maps, YouTube, and Android, the tech giant was poised to introduce a series of new artificial-intelligence-driven features. At the same time, on 13 May 2024, OpenAI, the company behind ChatGPT, rolled out GPT-4o along with a new user interface, marking its latest attempt to make its widely used chatbot more accessible and useful.
AI-enabled services are now available to the public across demographics and age groups, and the rivalry between the tech giants reignites discussions around safety and, indeed, security in the age of AI. This article therefore spotlights those concerns and answers some of the questions most likely to be raised by the general public, rather than just by the scientists driving the technology.
What exactly is GPT-4o, and what distinguishes it from its predecessor?
Simply put, GPT-4o offers a more human-like interaction: users can speak to it, it can read images and analyse emotions, and it supports 20 different languages [1]. Performance-wise, the new model matches GPT-4 Turbo on text, reasoning, and coding intelligence, while setting new high-water marks on multilingual, audio, and vision capabilities.
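For readers who want a concrete sense of what this multimodality looks like in practice, the minimal sketch below sends a text prompt together with an image to the model through the OpenAI Python SDK. It is illustrative only: the placeholder image URL and the exact prompt are assumptions, and it presumes the openai package is installed and an API key is available in the environment.

```python
# Minimal illustrative sketch: a text + image request to GPT-4o via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment.
# The image URL below is a placeholder for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                # Text part of the prompt
                {"type": "text", "text": "Describe the mood of the person in this photo."},
                # Image part of the prompt (hypothetical URL)
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

# Print the model's textual reply
print(response.choices[0].message.content)
```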
What about safety?
OpenAI asserts that the new version has safety "built in by design" and has been assessed using OpenAI's "Preparedness Framework", which monitors, evaluates, forecasts, and safeguards against potentially catastrophic risks.
Has there been regulation of AI-powered technology?
Without a doubt, there is fierce global competition in the development of AI technology, with the United States, China, the UK, and the EU as the primary contenders. Both the US and China view AI as vital to national security and economic advancement. Unfortunately, regulatory bodies and lawmakers are not keeping pace with the rapid advance of the technology. Nonetheless, there have been initiatives to regulate AI:
- The UK and EU: While the UK and EU have adopted different strategies, their overarching goals are similar: to balance the risks to users and the public against the need to foster investment and innovation in AI.
- In the EU: The European Commission proposed the Artificial Intelligence Act (AI Act) on 21 April 2021; it was subsequently approved by the European Parliament on 13 March 2024, with the aim of establishing a unified regulatory and legal framework for AI [2].
- UK’s white paper approach: In contrast to the EU’s new AI Act, the UK’s white paper suggests that, for now, there are no immediate plans to introduce specific AI legislation. Instead, the white paper outlines five principles of AI governance, with a focus on safety, security, and fairness [3].
- The United States: At present, the United States lacks comprehensive federal legislation or regulations governing the development of AI or explicitly prohibiting or restricting its usage. Nevertheless, there are federal laws in existence that touch upon AI, albeit with limited scope.
- China: In March 2024, China's proposed Artificial Intelligence Law prioritised the advancement of the artificial intelligence sector over safeguarding the interests of AI system users and other individuals. The draft focuses on establishing a legal framework conducive to the swift growth of the AI industry and emphasises the responsibilities the state should undertake, including offering policy support such as tax incentives and fostering talent in academic institutions. Nevertheless, according to the draft, providers must comply with China's existing laws on data and personal information protection. For instance, they are prohibited from using AI to analyse and assess individuals' behaviours, habits, interests, and hobbies, as well as their economic, health, and credit-related information.
Conclusion
The AI technology landscape is presently controlled by a handful of major players. In the absence of enforceable regulation, end-users have no choice but to rely on the assurances provided by AI providers. Personally, I notice clear variations among these players in their approaches to regulating AI technology.
In the United States, the primary influencers are the tech giants of Silicon Valley, driven by innovation that leads to profit. As a result, considerable pressure falls on state lawmakers to enact regulations that manage these technological advances.
China, as another major player, has seen its AI regulations crafted by university scholars, known as the Draft of Scholars. These guidelines serve as suggestions rather than legally binding legislation, and given the intense competition with the United States, we can only hope these suggestions are highly regarded.
In the UK (and the EU), there appears to be a more cautious approach, with a greater emphasis on regulating the technology. While this may seem wise, given the stances of the other two players there is a significant risk of losing our innovative edge and ultimately resorting to importing technology. It is not surprising, then, that China and the US are currently holding their first top-level dialogue on artificial intelligence in Geneva.
References:
[1] ChatGPT
[2] EU AI Act
[3] UK White Paper