December 7, 2023

UK AI Summit Debrief Part 2: The State of International AI Safety Governance

By Michael Frank

Originally published at CSIS

In a landmark effort to shape global artificial intelligence (AI) safety governance, the UK government convened the first international AI Safety Summit on November 1 and 2, 2023. The summit marks a significant step toward collaborative international work on AI governance and safety. Part 1 of this debrief explained how the summit framed AI safety. Part 2 charts the path forward for AI safety governance after the summit.

The primary output of the summit is the Bletchley Declaration, a statement of intent for global AI safety governance. The declaration sets out five main principles:

  1. Ensuring human-centric, trustworthy, and responsible AI
  2. Adopting regulatory frameworks that balance risks with benefits
  3. Collaborating across countries and sectors to help strengthen AI diffusion in developing countries
  4. Calling on frontier AI developers to commit to safety testing, evaluations, transparency, and accountability
  5. Identifying safety risks of shared concern

In organizing the summit, the UK government envisioned its role as carving out a distinct dialogue that could transcend commercial and strategic competition. Positive reviews of the summit suggest the government achieved that aim in the short term. China’s inclusion in the summit is a boon to international AI safety, and its endorsement of the Bletchley Declaration establishes a common language with which to discuss shared interests.

In the long term, it is difficult to see the United Kingdom diverging enough from the United States to play the role of an independent arbiter in the global AI governance landscape. The two allies have broad alignment on technology and security policy, deep research ties, and a common commercial interest in AI as hosts of the vast majority of leading labs. While it would be advantageous to both the United Kingdom and the United States for the former to emerge as the preeminent international convener on AI safety issues, the European Union and China could challenge that role, given the extensive overlap of American and British interests.

More immediately, the mantle will now pass to South Korea and France.

The South Korea and France Safety Summits

Along with the United Kingdom, South Korea will cohost a “mini virtual summit” in the spring of 2024. The event is intended to monitor progress on the principles in the Bletchley Declaration, feeding into the subsequent summit in France in the fall. South Korea’s minister of science and information and communication technology (ICT), Lee Jong-ho, delivered a speech on the first day of the summit, calling for the establishment of an international AI organization under the United Nations. It will be interesting to see whether the next six months bring any convergence on the question of whether a new international AI institution is necessary.

In fall 2024, France will host the second in-person AI Safety Summit. France is an intriguing AI governance actor that has so far influenced international governance primarily through its membership in the European Union and the G7. However, there are three reasons to anticipate a larger role for France independent of those two institutions.

First, France is one of the only countries with a potentially world-leading AI company. Mistral AI, founded by alumni of Google DeepMind and Meta, raised $113 million at a $260 million valuation in its first month of existence. Mistral AI is attempting to differentiate itself from other labs at that level of capitalization by pursuing smaller, more efficient training methods, open-sourcing its models, and primarily serving businesses rather than consumers. Second, with Mistral AI, Europe now has a prospective technology national champion, a goal that has eluded the French and German governments in recent years. Third, France has been assertive in supporting domestic AI development, offering public seed funding alongside private co-funding as part of its National Strategy for AI. Open-source AI has been a core element of that strategy.
