The Battle for the World’s AI Stack Has Already Begun


Nvidia’s Jensen Huang isn’t just talking about chip sales. He’s warning that the very foundation of global AI infrastructure is up for grabs—and America may be sleepwalking into losing it.

There’s a version of the future where the AI systems running across hospitals in Southeast Asia, factories in Africa, and government offices in Europe are all built on tools developed in Shenzhen, not Silicon Valley. Jensen Huang, the CEO of Nvidia, believes that future is closer than most people realize.

In a recent conversation on the Dwarkesh Podcast, Huang delivered one of his most direct warnings yet about the direction of the US–China tech rivalry. His concern wasn’t about Nvidia’s quarterly revenues or chip export licenses. It was about something far more consequential: which country gets to set the rules for how artificial intelligence is built, deployed, and scaled across the planet.

The real threat isn’t hardware—it’s the software layer underneath it

Most coverage of the US–China chip war focuses on the silicon: who has the fastest processors, who can manufacture at what node size, and who is closer to catching up. Huang’s argument cuts deeper than that.

For the past decade, Nvidia’s dominance has rested not only on the performance of its chips but also on CUDA—the software framework that researchers, startups, and enterprises around the world use to build AI applications. When a team at a university writes training code for a new AI model, they write it for CUDA. When a cloud provider builds AI infrastructure, they buy Nvidia hardware because that’s what the software ecosystem demands. CUDA isn’t just a tool; it’s a standard—a shared language for the entire global AI industry.
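The lock-in Huang describes shows up at the very top of nearly every research codebase. A toy sketch of the pattern (plain Python, not real framework code): in PyTorch-style frameworks, device selection typically looks like `torch.device("cuda" if torch.cuda.is_available() else "cpu")`, and the `"npu"` backend name for Huawei hardware is an assumption here, based on how Ascend support is commonly exposed via a plugin.

```python
def select_device(available_backends):
    """Pick a compute backend the way most research code does:
    prefer CUDA, otherwise fall back. A CANN-based stack needs its
    own branch -- which is exactly the porting cost at issue."""
    for backend in ("cuda", "npu", "cpu"):
        if backend in available_backends:
            return backend
    return "cpu"

# A lab with Nvidia GPUs lands on CUDA without a second thought:
print(select_device({"cuda", "cpu"}))  # -> cuda
# A lab restricted to Huawei Ascend hardware lands elsewhere:
print(select_device({"npu", "cpu"}))   # -> npu
```

The point of the sketch is that the preference order itself is the standard: millions of scripts assume CUDA first, and every one of them is a small switching cost for any rival stack.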

“If future AI models are optimized in a very different way than the American tech stack… China will become superior to the US.” — Jensen Huang

Huang’s fear is that this standard could fracture. DeepSeek, one of China’s most capable AI labs, is reportedly working to optimize its next-generation models to run on Huawei’s Ascend chips using Huawei’s own software framework, CANN. If that effort succeeds at scale, it would represent something more significant than a technical milestone. It would be proof that world-class AI can be built entirely outside the American technology stack.

Why a parallel ecosystem is a bigger problem than you think

Once an alternative development environment establishes credibility, adoption tends to compound quickly. Researchers who build on CANN will publish papers using CANN. Startups building on top of DeepSeek’s models will inherit CANN dependencies. Governments buying AI infrastructure from Chinese providers will end up with systems that have no connection to American software or hardware standards.

Over time, this doesn’t just cut Nvidia out of a market. It shifts where AI innovation happens, what norms and priorities get embedded into AI systems, and which country has the leverage to influence how AI is governed globally. That is what Huang means when he warns about a “horrible outcome”—not a bad quarter, but a structural realignment of technological power.

Huang also pushed back against the assumption that export controls can prevent this. China already has abundant computing capacity, a deep pool of AI researchers, and enough energy resources to run large-scale training workloads. Blocking access to Nvidia’s best chips doesn’t eliminate China’s ability to develop competitive AI; it gives Chinese companies a stronger incentive to build the alternative infrastructure that would make Nvidia irrelevant in that part of the world entirely.

The uncomfortable logic of engagement

Huang’s argument is counterintuitive, and he knows it. He is, after all, the CEO of a company with direct commercial interests in selling chips to China. Critics are right to note that conflict of interest. But the logic he’s describing is not simply self-serving—it’s a coherent strategic position that deserves to be taken seriously.

His point is that keeping Chinese AI developers building on American tools — even if that means some commercial exchange — gives the US ongoing influence over the direction of AI development. An open-source AI ecosystem that runs on Chinese hardware and software, developed independently of any American input, would give the US far less leverage than a world where Chinese researchers are still contributing to and building upon the American technology stack.

Whether policymakers agree with that framing is a separate question. But the debate is no longer hypothetical. DeepSeek’s V4 model, expected to launch soon, may offer the first real-world test of whether competitive AI can be trained and deployed at scale without a single Nvidia GPU in the pipeline. The answer will shape the conversation about AI, chips, and geopolitics for years to come.

What this means for the rest of us

For anyone working in technology, business, or policy, the takeaway is simple: the competition for AI dominance is not just about who builds the fastest chip or trains the largest model. It’s about who builds the platforms, the frameworks, and the standards that everyone else builds on top of. That layer of the stack — largely invisible to end users — is where real long-term power accumulates.

Huang is betting that America still has the advantage in that race. But he’s also warning, loudly and publicly, that the advantage is not guaranteed—and that some of the policies meant to protect it may be accelerating its erosion instead.
