AI, China, and Leverage
America’s semiconductor export controls are intended to slow China’s development of artificial intelligence (AI). This strategy has yielded a compelling irony: the more we squeeze China’s access to semiconductors now, the more we erode American leverage in a future crisis.
If the paramount objective is to deter Chinese behavior that harms American interests—for example, a blockade or invasion of Taiwan—then coercive leverage is a superior goal to immediate technology denial.
Thomas Schelling wrote:
“Suffering requires a victim that can feel pain or has something to lose. To inflict suffering gains nothing and saves nothing directly; it can only make people behave to avoid it. The only purpose, unless sport or revenge, must be to influence somebody’s behavior, to coerce his decision or choice. To be coercive, violence has to be anticipated. And it has to be avoidable by accommodation. The power to hurt is bargaining power. To exploit it is diplomacy.”
This principle explains why calls for a blanket export ban on advanced semiconductors are so shortsighted. A blanket ban would squander the coercive potential of American technology. America’s China policy needs to commit to power politics: deterrence, coercion, and power projection. That requires a ruthless dedication to acquiring and maintaining leverage.
A sanction or cutoff is most valuable when held in reserve as a credible threat. Like a new car driving off a dealership lot, it begins to lose value almost immediately. As it sorts out how to evolve beyond Biden-era policies on China and AI, the Trump administration should reconsider the approach that seeks baseline degradation of China’s AI ecosystem and instead prioritize establishing and maintaining weaponized interdependence. To the greatest extent possible, Chinese entities must be reliant on American technology ecosystems. Only then will America be able to deter or coerce behavior that runs counter to our interests in the moment of maximum consequence.
This argument runs counter to the prevailing consensus about what constitutes the most hardline policy towards China’s AI ecosystem, but it is built on real-world observations. Export controls on chips have not stopped China’s AI progress. Instead, China’s foundational and applied AI capabilities continue to scale. Whether that progress is in spite of American export controls—or because of them—is up for debate, but the evidence increasingly supports the latter. The controls have succeeded in one area: hobbling China’s ability to manufacture its own cutting-edge chips. That success means Chinese AI deployments remain dependent on American technology. That dependency is worth cultivating, not eliminating.
Preserving leverage entails some short-term risk, but the payoff justifies it. By allowing China-based entities access to American technologies—not at the cutting edge, but better than what they can get from Chinese suppliers—America preserves Schelling’s “power to hurt.” The goal of American policy should not be to stymie China’s AI development today, which is unrealistic. It should be to develop leverage that could hobble China’s AI ecosystem in a crisis, and to ensure China knows about that leverage so it avoids taking actions hostile to American interests. A leverage-focused strategy that emphasizes ecosystem integration and dependency will better serve American interests than an all-out campaign to cripple China’s AI capabilities.
It is important to honestly evaluate what chip controls have and have not accomplished; why the broader AI stack matters more than any single chip; how export controls have worked in manufacturing but not in deployment; and how the United States can weaponize interdependence to maximize strategic leverage.
Chip controls aren’t slowing foundational or applied AI in China because the stack matters more than the chip
Export controls have succeeded in blocking Chinese semiconductor manufacturers from developing state-of-the-art chips. They have not prevented China’s AI practitioners from advancing the state of the art. By all empirical measures, Chinese companies and research labs continue to train and deploy sophisticated AI models, often matching American counterparts in benchmark performance. Chris Miller recently highlighted the 2025 AI Index report showing Chinese AI labs converging on American peer capabilities across general language (MMLU), general reasoning (MMMU), mathematical reasoning (MATH), and coding (HumanEval), which “illustrates that Chinese AI labs are, at worst, fast followers in terms of model capabilities.”
How did this happen? Chinese firms have plenty of substitutes and workarounds at their disposal. Reports of circumvention, including smuggling and illicit procurement, illustrate how much harder the export control regime is to execute in practice than in theory. But assume for a moment that chip controls could avoid these problems. Training and running large AI models does not require a country to manufacture its own GPUs or import the best ones. Imported hardware, rented cloud access abroad, or inferior chips running in parallel can get the job done just fine. Training large language models across multiple data centers or regions is increasingly viable and will become the norm.
China’s AI research and commercial deployment timeline has not been derailed. Major Chinese tech firms still rolled out AI-driven products in 2023-25 on schedule. They may have done so using 7nm chips instead of 5nm, or using twice the number of chips to compensate for lower per-chip performance, but the end capability delivered to users is similar. From a strategic perspective, preventing China from deploying any AI would require draconian measures far beyond export controls—essentially an impossible task short of disconnecting China entirely from the global internet and tech economy. Short of that, the controls were bound to be leaky. The US government has had to continuously patch the rules (e.g., closing the cloud access loophole and updating chip specs in 2023 and 2024) because Chinese firms kept finding ways to get what they needed. Despite these patches, significant failures occurred: TSMC unwittingly manufactured millions of chips for Huawei via intermediaries. While export controls achieved a core goal in slowing Chinese chip manufacturing, they “did not kill China’s AI ecosystem” as some had assumed would happen. Applied AI in China is alive and well.
The chip fixation is causing America to neglect the rest of the stack
Looking at the rest of the stack makes it even clearer why a fixation on chips is inconsistent with practical experience. Energy, infrastructure, models, and applications are just as important as chips when it comes to “doing AI”. Energy is arguably America’s biggest AI problem right now, and industry leaders are sounding the alarm about real constraints. Satya Nadella, CEO of Microsoft, recently dismissed concerns about chip supply and explained that power is the company’s primary bottleneck:
The biggest issue we are now having is not a compute glut, but it's power, and the ability to get the builds done fast enough close to power. So if you can't do that, you may actually have a bunch of chips sitting in the inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips. It's actually the fact that I don't have warm shells to plug into.
The Trump Administration recently announced the construction of 10 new nuclear power plants, totaling about 10 gigawatts (GW) of new power generation. An executive order shortened permitting times to 18 months, and still the plants won’t be online for at least five years. Meanwhile, China installed 198 GW of solar capacity in the first five months of 2025 alone. This is not a competition. China is running laps around the United States, and there is no indication of a plan to catch up.
That leads to the infrastructure problem Nadella described. Warm shells don’t just need energy; first, they need a cold shell. As the languid pace of new power generation makes evident, America has a host of construction problems that put it at a significant disadvantage.
Models are the layer of the stack where actual AI capability first becomes visible, and there Chinese developers are proving they can compete just fine. Speaking at GTC 2025 in Washington on October 28, Jensen Huang pulled up a chart of open source models by usage: Chinese models are dominating open source. In 2025, DeepSeek has produced numerous cutting-edge innovations that dramatically reduce inference costs, such as dynamic context windows and more efficient context ingestion. Anecdotes are emerging that a majority of venture-backed American AI startups now use open source Chinese models. Just because American models appear to lead on benchmarks does not mean China is not thoroughly competitive in models.
Sitting on top of it all are the applications that are just starting to emerge. Many are agentic AI applications that use LLMs as infrastructure. Startups are hoping to serve the government with novel applications of AI for national security, but the environment in Washington towards AI startups remains aloof, at best. Congress had an opportunity to protect the nascent applied AI industry with a federal moratorium on state-level regulation, but instead yielded to parochial and misguided concerns about risk.
In short, when you look at the whole tech stack, there are more pressing competitive concerns than chips. It’s important to remember that the stack is a means to AI applications, which are themselves a means to power projection: ships, aircraft, missiles, drones, robots, cyber, space, carriers, human capital, leadership and strategic culture, and nuclear weapons. That’s not to say the individual aspects of the tech stack aren’t important. It’s actually worse than that. In blocking American companies like Nvidia from competing in the Chinese AI ecosystem, we voluntarily cede one of the last pieces of leverage we might have in a crisis scenario. China recognizes that, which is why it has reportedly taken steps to make state funding of data centers contingent on deploying native chips.
The AI boom is an opportunity to cultivate leverage over the Chinese AI ecosystem
The choice is not between “preventing advanced Chinese AI” and “allowing advanced Chinese AI”. China currently has advanced AI systems. The choice is between competing to get those workloads on allied and American systems, or ceding all of them to Huawei.
The competitive domain is not individual slices of the tech stack but the entire AI ecosystem. For example, CUDA is Nvidia’s software “railroad”: a parallel‑computing platform and programming model bundled with a deep stack of production‑grade libraries, compilers and toolchains, and inference runtimes like TensorRT‑LLM. Most modern AI frameworks and a huge share of model repos assume CUDA by default, which means performance, reliability, and developer ergonomics are delivered at the stack level, not the chip spec. Each generation of silicon instantly inherits a mature software ecosystem and an installed base that is costly to leave. Porting to alternatives is slow, expensive, and operationally risky.
For national security, that same moat is a leverage engine. The more Chinese developers, enterprises, and clouds that build on CUDA, the higher the switching costs off American rails—and the greater the practical control Washington can exercise (through licensing terms, support/update dependencies, and cloud KYC/attestation) if it must impose a crisis cutoff under lawful authorities. In short, chip sales are the on‑ramp; CUDA adoption is the dependency, and dependency is what becomes leverage. Rather than striving for complete disentanglement—which yields no leverage—the United States should embrace asymmetric interdependence centered around CUDA.
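The mechanics of such leverage can be made concrete with a toy sketch: a cloud provider’s access gate that combines peacetime KYC/attestation checks, targeted end-use enforcement, and a crisis-wide cutoff switch. Everything here is hypothetical — the field names, the policy tiers, and the decision logic are invented for illustration and encode no actual export-control rule or provider API.

```python
from dataclasses import dataclass

# Hypothetical illustration only: how a cloud KYC/attestation gate could
# combine targeted entity enforcement with an ecosystem-wide crisis cutoff.
# All names and tiers are invented for this sketch.

@dataclass
class CustomerAttestation:
    customer_id: str
    verified_end_user: bool   # passed KYC review
    entity_listed: bool       # appears on a restricted-entity list
    workload_class: str       # e.g., "commercial" or "military"

def access_decision(att: CustomerAttestation, crisis_mode: bool) -> str:
    """Return 'deny' or 'allow_subfrontier' for a compute-access request."""
    if att.entity_listed or att.workload_class == "military":
        return "deny"               # targeted end-use/entity enforcement
    if crisis_mode:
        return "deny"               # ecosystem-wide cutoff: dependency as leverage
    if att.verified_end_user:
        return "allow_subfrontier"  # sub-frontier access keeps workloads on US rails
    return "deny"                   # unverified customers get nothing

# Peacetime: a verified commercial customer stays dependent on American rails.
acme = CustomerAttestation("acme-cloud", True, False, "commercial")
print(access_decision(acme, crisis_mode=False))  # allow_subfrontier
# Crisis: the same dependency becomes the cutoff.
print(access_decision(acme, crisis_mode=True))   # deny
```

The point of the sketch is the asymmetry of the final two calls: the identical customer profile yields access in peacetime and denial in a crisis, which is precisely the “anticipated and avoidable” structure Schelling requires of coercive threats.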
If the objective is leverage in a crisis—not performative deprivation today—then policy should stop trying to degrade China’s AI in the abstract and start maximizing China’s lawful dependence on American rails below the frontier. Chip export controls have clearly bitten in manufacturing. They have not crippled Chinese deployment. The true frontier should stay out of China, but it is a national security imperative that Nvidia competes in the ecosystem with sub‑frontier systems that come with software, cloud governance, and license terms that keep Chinese workloads, tools, and talent on CUDA. There is still space for targeted end‑use or entity enforcement against the PLA. Economy‑wide bans sound appealing but actually collapse our visibility and accelerate substitution. Senior diplomats can quietly communicate the red lines (such as a blockade of Taiwan) whose crossing would trigger an immediate, ecosystem‑wide cutoff.
Don’t brandish the weapon in peacetime. Preserve it.