November 20, 2023

UK AI Summit Debrief Part 1: The Framing of AI Risk

By Michael Frank

Originally published at CSIS

The November 2 conversation between British prime minister Rishi Sunak and Elon Musk, ostensibly a distinct event from the UK AI Safety Summit, was a fitting conclusion to it. Like the summit at Bletchley Park, the Musk-Sunak meeting elevated long-term artificial intelligence (AI) risks to the forefront of the AI safety discussion while also generating its fair share of controversy. In the end, perhaps unlike the Musk-Sunak conversation, the summit was a substantive contribution to the development of global AI governance.

The UK government was precise in defining the scope of the summit: the safety of foundation models—broad AI tools that can adapt to a variety of applications—at the research frontier, discussed with a broad group of countries from each region and income group. UK officials framed the need for that specific conversation against existing international AI dialogues, such as the G7 Hiroshima Process, the Organization for Economic Cooperation and Development (OECD) AI Principles, and the Global Partnership on AI (GPAI), which either focus primarily on other aspects of AI governance or are limited largely to advanced economies. The countries included at the summit that are not represented in the G7, OECD, or GPAI were China, Indonesia, Kenya, Saudi Arabia, Nigeria, the Philippines, Rwanda, Ukraine, and the United Arab Emirates.

The four primary risk categories discussed were (1) AI misuse, primarily among non-state actors, (2) unexpected capabilities at the AI frontier, (3) “rogue AI” and superalignment, and (4) AI safety and societal diffusion.

This commentary summarizes the summit discussion around each risk category and sets it in a broader policy context of other political, technological, and diplomatic developments.

AI Misuse, Primarily Among Non-state Actors

Frontier AI misuse, set within the context of global risk and governance, pertains to the inappropriate or dangerous application of advanced AI technologies that pose significant threats to global security and stability. This encompasses the potential for cyberattacks, manipulation of information, and other harmful uses that can undermine international safety protocols, biosecurity, and cybersecurity.

Biosecurity in particular stands out as an area of common focus in international governance. The Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, announced in the days preceding the summit, is a broad, sprawling document that reserves specificity for a small number of issues. Biosecurity is one of those exceptions. Models trained primarily on biological sequence data trigger mandatory reporting at a lower computing power threshold (10^23 operations) than all other categories of AI models (10^26 operations). Researchers at RAND Corporation are in the midst of a study on the real-world impact of large language models on bioweapons engineering, comparing model capabilities with information already available on the internet through conventional search.
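To make the two-tier trigger concrete, the sketch below shows how the executive order’s reporting rule works in practice. It is a minimal illustration, not official tooling: the function name and structure are invented here, and the thresholds assume the figures published in the order (10^26 total training operations for general models, 10^23 for models trained primarily on biological sequence data).

```python
# Illustrative sketch of the executive order's two-tier compute
# reporting trigger. Thresholds reflect the published figures;
# the function and names are hypothetical, for illustration only.

GENERAL_THRESHOLD_OPS = 1e26  # total training operations, general models
BIO_THRESHOLD_OPS = 1e23      # lower bar for biological sequence models


def requires_reporting(training_ops: float, bio_sequence_model: bool) -> bool:
    """Return True if a model's training compute crosses the
    mandatory-reporting threshold that applies to its category."""
    threshold = BIO_THRESHOLD_OPS if bio_sequence_model else GENERAL_THRESHOLD_OPS
    return training_ops >= threshold


# A 1e24-operation training run reports only if it is a bio-sequence model.
print(requires_reporting(1e24, bio_sequence_model=True))   # True
print(requires_reporting(1e24, bio_sequence_model=False))  # False
```

The point of the lower bar is visible in the example: the same training run that falls three orders of magnitude short of the general threshold still triggers reporting when the model is trained primarily on biological sequence data.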

Of the countries represented at Bletchley Park, all are parties to the Biological Weapons Convention, which “prohibits the development, production, acquisition, transfer, stockpiling and use of biological and toxin weapons.” Consensus on safety measures that preclude the development of dangerous biological agents should be reasonably easy to reach.

Unexpected Capabilities at the AI Frontier

The technological frontier is defined by rapid and often unforeseen advances in AI, where the abilities of AI systems outstrip expectations and prior predictions. These advances can bring significant benefits in fields such as health, education, and environmental science, while simultaneously posing considerable risks precisely because of their unforeseen nature and potential for misuse. The unpredictability stems from the rapid scaling of AI models, their ability to connect with other systems, and the sheer number of possible permutations of their applications, which makes it challenging to anticipate all potential outcomes and implications before deployment.
