Radiant Intel
October 25, 2023

Four Lessons from Historical Tech Regulation to Aid AI Policymaking

By Michael Frank

Originally published at CSIS

Since Senate Majority Leader Chuck Schumer announced the SAFE Innovation Framework for AI legislation in June, there has been a steady advance in U.S. artificial intelligence (AI) policy. Federal regulators have declared that automation is not an excuse to run afoul of existing rules. They are not waiting for new legislation—they are instead pursuing logical extensions of existing authorities to form a patchwork of new rules relevant to AI. State governments are considering new bills that would regulate AI training—such as data protection and opt-out requirements—and use cases, such as profiling and facial recognition. As federal rulemaking and Congress’ AI Insight Forums proceed, it is worth considering how the experience of regulation in the early internet and social media eras could inform that work.

Four lessons from that experience carry over into the new AI era of technology policy, cutting across the regulation of many different sectors.

Lesson 1: The speech disputes of the social media era are unresolved, and content regulation is likely to shatter any bipartisan consensus on AI policy.

Social media regulation was not always partisan. Social media companies enjoyed broad bipartisan support in their early years as innovators and job creators of the new economy. In 2015, 74 percent of Democrats and 72 percent of Republicans believed technology companies had a positive effect on the country. The 2016 election was a major inflection point, after which liberal and conservative opinions on regulation began to diverge. In a 2018 survey, a majority of Republicans (64 percent) believed social media platforms favored liberal views over conservative ones. The points of no return were Twitter's and Facebook's 2021 decisions to ban Donald Trump and the proliferation of vaccine misinformation, which together hardened a rigid dichotomy between the parties.

Democrats' interest in social media regulation now revolves around amending Section 230 to remove certain platform legal immunities related to advertising, civil rights, harassment, wrongful death, and human rights. Another effort, introduced by Senator Amy Klobuchar in 2021 but since stalled, would carve out an exception to Section 230 platform immunities for health misinformation. Those measures would likely encourage broader content moderation. Republicans concerned about social media companies' liberal bias would like to reform Section 230 in the opposite direction, limiting or even removing liability protections, a change that could put content moderation in direct conflict with the First Amendment and create a whole host of other intractable problems.

Polling suggests the public's views are less nakedly partisan than Congress's, but they are rife with contradictions. There is broad agreement that policymakers should do something about social media regulation. According to Gallup and the Knight Foundation, only 3 percent of social media users agree with the statement "I trust the information I see on social media," 53 percent believe social media has a negative impact on others like them, and 71 percent feel the internet divides the country.

Lesson 2: U.S. antitrust authorities are already empowered to promote competition in the AI market but may be reluctant to exceed precedents established for consumer internet platform companies.

U.S. antitrust authorities have a robust non-price framework, backed by case law and statute, for bringing competition actions against ICT platform companies offering "free" products in nebulous markets. Section 7 of the Clayton Antitrust Act establishes that antitrust authorities need only prove a merger may substantially lessen competition, a lower threshold than demonstrating actual anticompetitive outcomes. In the Microsoft antitrust case of the late 1990s and early 2000s, former Federal Trade Commission (FTC) commissioner Orson Swindle testified that antitrust prosecution rests on successfully proving that a monopolist has abused market dominance to harm "consumer welfare in the form of higher prices, reduced output, and decreased innovation." Regulators successfully applied that formula in ICT antitrust actions against AT&T and Microsoft.

The 2023 update to the Merger Guidelines from the Department of Justice and the FTC states that non-price indicators may be useful in "free" product markets. Antitrust authorities can use non-price data to argue that antitrust targets understood how their actions would affect competition. A/B testing, in which a firm runs internal experiments comparing two variants to determine the better course of action, is a common non-price data point in antitrust cases. The FTC cited evidence from A/B testing in its successful 2023 lawsuit against Credit Karma, which resulted in a $3 million fine for misrepresenting the likelihood that a customer would be approved for credit.
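For readers unfamiliar with the mechanics, an A/B test of the kind referenced above can be sketched as a standard two-proportion z-test. The figures below are hypothetical and illustrative only, not drawn from any case cited here:

```python
from statistics import NormalDist

def ab_test(conversions_a: int, n_a: int, conversions_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis that the variants perform the same
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical experiment: 520 of 10,000 users convert under variant A,
# 610 of 10,000 under variant B.
lift, p = ab_test(520, 10_000, 610, 10_000)
print(f"lift={lift:.4f}, p={p:.4f}")
```

A low p-value tells the firm, and potentially a regulator reviewing its internal documents, that the observed difference between variants is unlikely to be chance, which is what makes such experiments probative evidence of what a company knew about its own product choices.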
