
The good, the not-so-good, and the ugly of the UN’s blueprint for AI

Cameron F. Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow, Governance Studies, Center for Technology Innovation

August 29, 2024


  • A leaked report from the United Nations' High-Level Advisory Body on AI indicates a desire for increasing UN involvement in international AI governance functions.
  • Rapidly expanding networks on AI policy, safety, and development have produced unprecedented levels of international cooperation around AI.
  • Rather than forming a superstructure over these efforts, the UN should focus on promoting AI access and capacity-building while leveraging the agility and flexibility of the emerging networks of global governance initiatives.
United Nations Secretary-General António Guterres attends the second day of the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, Britain, November 2, 2023. Leon Neal/Pool via REUTERS

In the "AI summer" of recent years, centers of artificial intelligence (AI) policymaking have blossomed around the globe as governments, international organizations, and other groups seek to realize the technology's promise while identifying and mitigating its accompanying risks. Since Canada became the first country to announce a national AI strategy in 2017 and then led G7 adoption of a "common vision for the future of artificial intelligence" in 2018, dozens of countries have developed AI strategies, almost every multilateral organization has adopted a policy statement on AI, and the Council of Europe has tracked some 450 AI governance initiatives from a wide variety of stakeholders. This worldwide flurry reflects how much generative AI models and the explosive uptake of ChatGPT have captured mainstream attention.

Now, the United Nations (UN) aims to impose order on this expanding landscape. Secretary-General António Guterres, a prominent voice calling for oversight of emerging foundational AI models, initiated a Global Digital Compact to be finalized alongside this September's UN General Assembly. Last year, to explore how AI fits into this compact, he appointed international experts to a High-Level Advisory Body on AI (UNAB). In short order, this panel issued a promising interim report, which framed an approach that would be "agile, networked, flexible" and "make the most of the initiatives already underway." A final report planned before the September summit is past due.

A leaked draft released in July fails to adhere to the approach promised in the interim report. Beneficial parts of the draft emphasize the UN's indispensable role in promoting access and capacity-building to ensure that the benefits of AI are distributed globally. But the draft goes off course in proposing a broad role in AI policy for the UN Secretariat as a superstructure for AI governance functions already underway in multiple channels. Involving a body of 193 nations with such widely differing interests in many of these functions is a poor prescription for agility and flexibility, a distraction from the critical path of broadening AI capabilities, and an invitation to accentuate geopolitical divides around AI.

The expanding universe of AI governance

The UN effort adds to spreading constellations of governance: networks with many hubs and nodes and diverse interconnections. After its initial 2018 statement, the G7 established a Global Partnership on Artificial Intelligence (GPAI) to put responsible AI principles into practice, eventually involving international experts and 29 governments. The Organization for Economic Co-operation and Development (OECD) has been deeply involved with AI, adopting its AI Principles in 2019, establishing an AI Policy Observatory to track national policies and AI incidents, and developing a definition of AI systems now incorporated into European Union law and UN General Assembly resolutions, among other policies. Within the UN, the International Telecommunication Union has convened annual AI for Good summits since 2017, and UNESCO issued AI ethics recommendations in 2022. Last year, China launched a "Global AI Governance Initiative." This year, the Council of Europe arrived at the Framework Convention on Artificial Intelligence, and African Union ministers endorsed a continental AI strategy to position Africa for AI adoption and participation in global governance.

These broad efforts have produced more focused initiatives. A Japanese project in the G7 converged with U.S.-EU discussions and with voluntary commitments the White House secured from leading developers of foundational AI models to produce the Hiroshima AI Process "International Code of Conduct for Organizations Developing Advanced AI Systems." This code and a broader set of principles for all AI developers were endorsed by the G7 and later by a growing 53-nation "Hiroshima AI Process Friends Group." The OECD and GPAI recently agreed to combine resources and engage with a broader group of nations on AI work. Last November, the AI Safety Summit convened by the United Kingdom and the ensuing Seoul summit spawned AI safety institutes in the U.K., U.S., South Korea, Canada, Japan, and France, among others, to develop testing and monitoring for emerging AI models. In addition to these various intergovernmental forums, several international standards development organizations, especially ISO/IEC and the IEEE, have adopted numerous technical standards for AI.

There is no shortage of governance actions

As the UNAB draft report says, "there is no shortage of documents and dialogues focused on governance." It nevertheless concludes that "a global governance deficit with respect to AI" exists. The rationale for this conclusion is thin.

The draft report dwells on the point that, of seven major international governance instruments outside the UN, just seven countries (the G7 members) are parties to all of them, while 118 are party to none. However, rather than being an ominous development, it makes sense that the G7 members have taken the lead: as the countries where AI has advanced the most, they face the most pressing need to act and have the power and resources to do so. In smaller, like-minded groups, they have been able to move with greater speed and achieve more concrete outcomes than a body of 193 members with very disparate interests could.

In turn, the outcomes of these more focused efforts are translating into concrete effect as the EU implements its AI Act, Canada debates its proposed Artificial Intelligence and Data Act, U.S. federal agencies implement President Biden's executive order to deploy AI with care for safety and individual rights, and a U.K.-appointed panel of international experts has produced an initial international scientific report on the safety of advanced AI. Similarly, the OECD has proven its ability to make progress on AI policy. Its union with GPAI shows that the OECD's research-based definitions and data have proven valuable in informing AI governance. Furthermore, the OECD has developed an effective track record of bringing stakeholders into policy development, in contrast to the UN's member-state-driven process.

In addition, the networks and nodes of AI governance are wider and more robust than the draft report gives credit for. Its counting of countries overlooks ways that non-G7, non-OECD countries participate in current governance efforts. The G7 includes the European Union and invites various leaders from outside the group to its summits, providing indirect participation to states that are not G7 members. Together, the OECD, GPAI, the Hiroshima Friends Group, and participants in the U.K. Safety Summit and its "Bletchley Declaration" include a significant number of countries from the Global South: Argentina, Brazil, Brunei, Chile, China, Colombia, Costa Rica, India, Kenya, Laos, Mexico, Nigeria, Rwanda, the Philippines, Saudi Arabia, Singapore, Thailand, Turkey, and the UAE. All UN member states participated in UNESCO's recommendation on AI ethics. And those 450 AI governance initiatives mentioned earlier come not only from international bodies, but also from civil society organizations, corporations, and academia.

Governance should evolve, not descend from on high

The UNAB report declares that a governance deficit exists because "the patchwork of norms and institutions is still nascent and full of gaps" and the existing initiatives cannot be "truly global in reach and comprehensive in coverage." This presumes that AI calls for governance on a global basis and that such governance must be comprehensive. The report insists that "the imperative of global governance ... is irrefutable," but it does not establish why.

It is quite correct that AI has global dimensions that require international cooperation. I co-lead a small but early dialogue on global governance premised on the need for global cooperation and alignment on AI policy and development. As I wrote along with colleagues in a 2021 report, "no one country acting alone can make ethical AI pervasive, leverage the scale of resources needed to realize the full benefits of AI innovation, and ensure that advances from developing AI systems can be made available to users in all countries in an open and nondiscriminatory trading system." However, global cooperation is not the same as global governance. And the array of collaborative frameworks and projects demonstrates that the value of cooperation around AI is widely understood, with a remarkably high level of cooperation at this early stage.

A key insight of the UNAB's 2023 interim report was that consideration of AI governance must begin by establishing what functions governance performs and where the gaps are. While the draft report identifies clear gaps in realizing the opportunities of AI, it does not specify what functions relating to identifying and mitigating risks, or to aligning national AI policies, are not already being performed. Although the report declines to recommend establishing an international governmental body or governance of "all aspects of AI," it does suggest several UN functions that sound like setting global rules for AI.

One proposed function is a "policy forum" of member states to "foster interoperable governance approaches," including safety standards, rights, and norms that would "set the framework for global cooperation." To drive soft law development, the report suggests a standards exchange that would operate not just as a resource on the bottom-up work of international standards bodies, but as a body to evaluate the standards themselves and identify where additional standards are needed. For capacity-building, it recommends a global fund used not only to expand access, but also to provide sandboxes and tools to test "governance, safety, and interoperability solutions."

While an AI Office within the UN Secretariat may make sense to align efforts across UN bodies and conduct outreach to stakeholders, its mission should be that of a facilitator rather than a policymaker.

The strength of networks

What the UNAB draft report describes as a "patchwork" is what others have called "regime complexes" or, as Nobel Prize-winning economist Elinor Ostrom put it, "polycentric" governance. Such networks create fluid space to build coalitions on the wide range of issues presented by AI, iterate on information and solutions, and distribute functions where the greatest capacity and energy exist. As with other networks, various nodes and hubs provide multiple pathways that protect against failure points and speed up the transmission of ideas. The rapid development and endorsement of the Hiroshima AI Process is an example of an iterative back-and-forth that can advance governance step-by-step. Far more than what is proposed in the UNAB final draft, existing systems meet the interim report's goal of being agile, networked, and flexible.

The diverse centers of effort involved provide distributed and iterative solutions to the complexity of AI. AI differs from other subjects that serve as reference points for approaches to global issues, such as nuclear power and climate change: it is a general-purpose technology with multidimensional attributes that are only beginning to be discovered, and it operates at unprecedented scale and evolves rapidly. Unlike climate change, where the broad membership of the IPCC can project from a vast accumulation of weather and climate observations across time and space, most of the data about AI is unavailable or unknown outside a relatively small group of experts. The developing field of AI is far from ripe for centralized governance.

The UNAB is absolutely right to highlight the potential for AI to accentuate digital divides among countries and within societies. It is evident that the winner-take-all effect of technology has concentrated its gains, and that the scale, speed, and versatility of AI raise the stakes for societies with limited access to the technology or the skills to adapt it. This makes the UN's development mission compelling and its broad membership a decisive comparative advantage there, more so than on the alignment of national policies.

In the end, the UN member states will decide what goes into the Global Digital Compact. UN General Assembly resolutions on AI adopted earlier this year, one led by the U.S. and another initiated by China, have featured the development mission as the key priority, and the July draft of the Global Digital Compact is more agnostic in its AI outcomes than the draft UNAB recommendations. For many member states, leveraging the benefits of AI to improve their productivity and ensure that they are not left behind looms larger than the safety and ethics issues that dominate many AI policy salons. This backdrop provides some hope that, rather than dabble in managing AI risks and safety or the alignment of national AI policies and laws, as proposed in the UNAB draft report, the General Assembly will focus relentlessly on the UN's critical role in expanding access and capacity to enjoy the promise of AI.
