Dubai, United Arab Emirates — Sam Altman, CEO of ChatGPT maker OpenAI, recently described the dangers of artificial intelligence (AI) that keep him awake at night. It is not the prospect of killer robots roaming the streets that concerns him most, Altman said, but rather subtle societal misalignments that could have catastrophic consequences.
Speaking at the World Governments Summit in Dubai via video call, Altman called for the creation of a regulatory body, similar to the International Atomic Energy Agency, to oversee AI advancements. He emphasized that AI is progressing at a faster pace than society expects, and without proper regulation, unforeseen harm may result.
Altman acknowledged the importance of ongoing discussions and debates surrounding AI regulation. Around the world, conferences, policy papers, and idea-sharing sessions are underway. However, Altman believes these discussions must culminate in a concrete action plan that garners widespread support in the coming years.
OpenAI, a leading artificial intelligence startup based in San Francisco, has attracted significant investment from Microsoft. The Associated Press has also partnered with OpenAI to give the company access to its news archive. However, OpenAI and Microsoft are currently facing legal action from The New York Times, which alleges its stories were used without permission to train OpenAI's chatbots.
The future of AI holds immense promise and potential. Yet, it is crucial to strike a delicate balance between innovation and responsibility. By proactively implementing effective regulations, aligning societal interests, and fostering global cooperation, we can shape a future where AI benefits humanity without unintended adverse consequences.
OpenAI’s Role in Commercializing Generative AI
OpenAI, with its remarkable achievements, has thrust its CEO, Sam Altman, into the spotlight as the public face of generative AI’s swift commercialization. However, concerns have also arisen regarding the potential risks associated with this new technology.
One country that exemplifies these risks is the United Arab Emirates (UAE), an autocratic federation comprising seven hereditary sheikhdoms. The UAE tightly controls speech within its borders, thereby impeding the free flow of accurate information. This restriction directly affects machine-learning systems like ChatGPT, which heavily rely on precise details to provide users with informed responses.
Moreover, the UAE is home to G42, an Abu Dhabi-based company overseen by the nation’s influential national-security adviser. Experts widely regard G42 as the primary developer of Arabic-language artificial-intelligence models globally. However, the company has faced allegations of espionage due to its association with a mobile-phone app suspected of being spyware. Furthermore, there are claims that it secretly obtained genetic material from Americans for the Chinese government.
In response to American concerns, G42 announced its decision to sever ties with Chinese suppliers. However, during a discussion moderated by the UAE’s Minister of State for Artificial Intelligence, Omar al-Olama, Altman did not address any local concerns.
Altman said he was pleased to see a positive shift in how schools perceive AI technology. Previously, teachers were apprehensive about students using AI to write papers, but many now recognize its importance for the future. Still, Altman acknowledged that AI remains in the early stages of development.
He compared the current state of AI technology to the earliest cellular phones, which were equipped with black-and-white screens. With time, Altman believes, significant advancements will occur: AI will improve considerably within a few years and become truly remarkable within a decade.