Can the U.N. really control the potential threat and power of AI?

September 29, 2023

World leaders tackled the very real issues of war and disaster as they gathered in New York last week for the annual high-level sessions of the U.N. But they also began to grapple with a problem that, for now, remains largely theoretical: the threat that artificial intelligence poses to humanity.

Opening the summit, U.N. Secretary General António Guterres said, “Generative artificial intelligence holds much promise—but it may also lead us across a Rubicon and into more danger than we can control.” When he took office as U.N. chief in 2017, he said, only two world leaders had even acknowledged AI. Now, he said, artificial intelligence is a subject of both amazement and terror.

AI has surged into public view over the past year. There are grave concerns that the technology, already in use in conflict zones such as Ukraine, could threaten the livelihoods of everyone from autoworkers to Hollywood writers. Many consider this an “Oppenheimer” moment, after Robert Oppenheimer, the American physicist who oversaw the development of the atomic bomb, as Today’s WorldView has noted.

After the bomb was built, Oppenheimer championed nuclear nonproliferation and worked with the newly formed United Nations. Today, nearly 80 years later, both business leaders and government officials are again looking to the United Nations, the world’s premier multilateral organization, for direction.

Guterres has taken up the challenge. At the United Nations last week, the most prominent call for a High-Level Advisory Body on Artificial Intelligence, which could eventually lead to the creation of a U.N. agency devoted to AI, came from OpenAI CEO Sam Altman, an American AI executive who is sometimes compared to Oppenheimer and frequently draws the comparison himself. The International Atomic Energy Agency (IAEA), Altman has suggested, could serve as a template for the worldwide coordination of AI governance.

For those following the recent global nuclear debate, however, the analogy may not be comforting. The war in Ukraine and the unexpected rise in nuclear tensions it has sparked, more than 65 years after the IAEA was founded, have prompted questions about whether a divided and fractured United Nations can still accomplish its goals.

The U.N.’s plans for AI are still in the early stages, but they should advance quickly. Thousands of applications have already been submitted for the High-Level Advisory Body. The board must be formed by October so that it can deliver its final report and recommendations by September 2024, when Guterres will convene the “Summit of the Future” at the high-level U.N. gathering.

There are already signs of division. The IAEA’s record of building cooperation on contentious issues lends support to using its approach for AI regulation. But some in the U.N. system disagree that the IAEA, with its emphasis on tangible nuclear material, provides the best model for governing an intangible, digital technology.

Other models have also been put forward, such as the Intergovernmental Panel on Climate Change, which places a stronger emphasis on expert judgment. Others question whether a new agency is necessary at all. Time magazine recently quoted Aki Enkenberg, team leader for innovation and digital cooperation at Finland’s Ministry of Foreign Affairs, as saying it seemed a “hasty move” to insist on a new agency when existing bodies might suffice.

The problem posed by AI is made harder by the fact that its effects, and the “possible pathways” by which it could endanger humanity, remain unknown. Even with a shared understanding of the risk, “it took decades to build an effective system of control for atomic energy,” Ian Stewart, executive director of the James Martin Center for Nonproliferation Studies in Washington and a former official with the U.K. Ministry of Defense, wrote for the Bulletin of the Atomic Scientists in June. The dominance of the corporate sector in AI, he added, further complicates governance.

Moreover, AI is being shaped not by academic scholars but by young digital entrepreneurs, who undoubtedly have far more authority, and very different ideals, than the U.N. ambassadors seated in Turtle Bay. Is Altman, for instance, really eager to hand over control of AI to the U.N.? No, his harshest detractors argue; he is simply being cynical.

Beyond these new “superintelligence” challenges, the United Nations will still have to contend with the all-too-familiar problem of geopolitical divides. An earlier AI-related effort, the Campaign to Stop Killer Robots, launched without the backing of powerful nations such as the United States. This time, Russia has made clear that it will not support the establishment of a new U.N. agency on artificial intelligence, weakening whatever consensus Guterres might have been able to forge.