July 10, 2023

Managing Existential Risk from AI without Undercutting Innovation

By Michael Frank

Originally published at CSIS

It is uncontroversial that the extinction of humanity is worth taking seriously. Perhaps that is why hundreds of artificial intelligence (AI) researchers and thought leaders signed on to the following statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The Statement on AI Risk and the collective gravitas of its signatories have drawn the attention of leaders around the world to the regulation of AI, in particular generative AI systems like OpenAI’s ChatGPT. The most advanced regulatory effort is underway in the European Union, whose parliament recently passed its version of the Artificial Intelligence Act (AI Act). The AI Act’s proponents have suggested that rather than extinction, discrimination is the greater threat. To that end, the AI Act is primarily an exercise in risk classification, through which European policymakers are judging applications of AI as high-, limited-, or minimal-risk, while also banning certain applications they deem unacceptable, such as cognitive behavioral manipulation; social scoring based on behavior, socioeconomic status, or personal characteristics; and real-time biometric identification by law enforcement.

The AI Act also provides regulatory oversight of “high-risk” applications such as biometric identification in the private sector, management of critical infrastructure, and relevant uses in education and vocational training. It is a comprehensive package, which is also its main weakness: classifying risk through cross-sectoral legislation will do little to address existential risk or AI catastrophes while also limiting the ability to harness the benefits of AI, which have the potential to be equally astonishing. What is needed is an alternative regulatory approach that addresses the big risks without sacrificing those benefits.

Given the rapidly changing state of the technology and the nascent but extremely promising AI opportunity, policymakers should embrace a regulatory structure that balances innovation and opportunity with risk. While the European Union does not neglect innovation entirely, the risk-focused approach of the AI Act is incomplete. By contrast, the U.S. Congress appears headed toward such a balance. On June 21, Senate majority leader Chuck Schumer gave a speech at CSIS in which he announced his SAFE Innovation Framework for AI. In introducing the framework, he stated that “innovation must be our North Star,” indicating that while new AI regulation is almost certainly coming, Schumer and his bipartisan group of senators are committed to preserving innovation. In announcing the SAFE Innovation Framework, he identified four goals that forthcoming AI legislation should achieve:

  • Security: instilling guardrails to protect the U.S. against bad actors’ use of AI, while also preserving American economic security by preparing for, managing, and mitigating workforce disruption.
  • Accountability: promoting ethical practices that protect children, vulnerable populations, and intellectual property owners.
  • Democratic Foundations: programming algorithms that align with the values of human liberty, civil rights, and justice.
  • Explainability: transcending the black box problem by developing systems that explain how AI systems make decisions and reach conclusions.

Congress has an important role to play in addressing AI’s risks and empowering federal agencies to issue new rules and apply existing regulations where appropriate. Sending a message to the public—and to the world—that the U.S. government is focused on preventing AI catastrophes will inspire the confidence and trust necessary for further technological advancement.

AI is evolving rapidly, and regulators need a framework that addresses risks as they evolve while also fostering potentially transformative benefits. The implication is not that policymakers should embrace unregulated AI; there should undoubtedly be guardrails. As Schumer and his colleagues pursue their four goals, they should design regulation with four principles in mind: (1) preventing the establishment of anti-competitive regulatory moats for established companies; (2) focusing on resolving obvious gaps in existing law in ways that assuage concerns about existential risk from AI; (3) ensuring society can reap the benefits of AI; and (4) advancing “quick wins” in sector-specific regulation.

Preventing the establishment of anti-competitive regulatory moats for established companies.

Regulatory solutions should not preclude the development of a competitive AI ecosystem with many players. DeepMind and OpenAI, two of the leading AI companies, are 12 and 7 years old, respectively. They have an edge over the competition today because of the quality of their work. If they retain that competitive position 20 years from now, it should be because of their superior ability to deliver safe and transformative AI, not because regulations have created entrenched monopolies. Entrepreneurship remains at the heart of innovation, and many of the most transformative AI companies of this new era may not yet exist. Today’s technology titans like Facebook, Google, and Netflix were founded decades after the predecessor of the modern internet, and years after the 1993 release of the World Wide Web into the public domain. The Federal Trade Commission (FTC) could clarify guidance on what would constitute anti-competitive mergers and acquisitions of AI companies. An overtly pro-competitive stance from the FTC would help to encourage broad innovation and economic growth.

Focusing on resolving obvious gaps in existing law in ways that assuage concerns about existential risk from AI.
