They said it couldn’t be done without NVIDIA. They said China was at least “three years behind.” They were wrong.

In a move that should send absolute shivers down the spine of every policy hawk in Washington (and every shareholder at NVIDIA), Zhipu AI has effectively declared the “US Sanction Wall” broken.

They haven’t just released another model; they have released GLM-Image, a state-of-the-art multimodal giant, trained entirely on Huawei’s domestic Ascend 910B chips. This isn’t a prototype. This isn’t a toy. This is a full-scale, commercial-grade training run that proves the Chinese AI ecosystem has successfully decoupled from the West.

The “Impossible” Benchmark

The "Impossible" Benchmark

For the last two years, the entire premise of US export controls was simple: deny access to H100s and A100s, and you starve the dragon of compute. The theory was that even if China could build a chip, they couldn’t build the software stack to run it at scale without crashing every few hours.

Zhipu AI just destroyed that theory.

On Huawei’s Atlas 800T A2 servers, Zhipu ran the entire training pipeline, from large-scale dataset preprocessing to final fine-tuning, on the Ascend 910B processor. But the real story here isn’t the hardware; it’s the software. They ditched CUDA entirely for Huawei’s MindSpore framework, achieving training stability that they claim “approaches the practical limits” of the hardware.
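Zhipu hasn’t published the GLM-Image training code, so to make the “no CUDA” point concrete, here is a minimal sketch of what an Ascend-targeted training loop looks like using MindSpore’s public API. The network and dataset are toy stand-ins (nothing like GLM-Image’s actual architecture); the point is what’s absent: no CUDA, no PyTorch, and the device_target switch is the only thing binding the script to Huawei silicon.

```python
# Minimal MindSpore sketch of an Ascend-targeted training loop.
# Illustrative only: Zhipu's GLM-Image code is not public; the toy
# network and random dataset below are stand-ins.
import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.dataset as ds

# On an Atlas server this selects the Ascend NPU backend; on an
# ordinary machine, swap in "CPU" to run the same script.
ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")

class TinyNet(nn.Cell):
    """Stand-in for a real multimodal network."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(32, 64)
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(64, 10)

    def construct(self, x):
        return self.fc2(self.relu(self.fc1(x)))

# Toy dataset: random features and labels, batched like a real pipeline.
features = np.random.randn(1024, 32).astype(np.float32)
labels = np.random.randint(0, 10, size=(1024,)).astype(np.int32)
train_set = ds.NumpySlicesDataset({"data": features, "label": labels},
                                  shuffle=True).batch(64)

net = TinyNet()
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01,
                        momentum=0.9)

# ms.Model wraps network, loss, and optimizer into a Keras-style loop.
model = ms.Model(net, loss_fn=loss_fn, optimizer=optimizer)
model.train(3, train_set)
```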

Let that sink in. They aren’t just “making it work.” They are hitting hardware utilization rates that rival the best CUDA clusters. This echoes the efficiency breakthroughs we saw with Soprano 1.1-80M, where architectural optimization is beating raw brute force.

Why This Terrifies Washington (and NVIDIA)

If this performance is real, the “chokehold” strategy is officially dead.

Washington’s gambit relied on the idea that China couldn’t replicate NVIDIA’s ecosystem. But necessity is the mother of invention. By forcing Zhipu, Baidu, and others into a corner, the US inadvertently created a massive, captive market for Huawei to iterate its software stack at warp speed.

For NVIDIA, this is a nightmare scenario. We recently discussed China’s open-weight dominance, but that analysis largely assumed Chinese labs were still quietly buying black-market H100s. Now? If Huawei’s Ascend stack is “good enough” for commercial training, NVIDIA loses one of its largest growth markets permanently. Once a developer ecosystem migrates to a new stack (like MindSpore), it rarely switches back.

The Technical Reality Check

Before we declare “Game Over,” let’s look at the specs objectively. The Ascend 910B is roughly comparable to an NVIDIA A100 in raw FLOPS, but it has historically suffered from poor interconnect bandwidth, the critical bottleneck when thousands of chips train a single model together.

Zhipu’s success suggests they have solved the distributed training stability problem. Training a large multimodal model requires thousands of chips talking to each other perfectly for weeks. If a single chip fails, the run crashes. Zhipu’s ability to finish this run implies that Huawei’s CANN (Compute Architecture for Neural Networks) software has matured significantly.
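To see why that matters, consider the standard mitigation: periodic checkpointing with atomic writes, so a dead node costs minutes of progress instead of weeks. The sketch below is generic Python, not Zhipu’s or Huawei’s actual recovery code; every name in it is hypothetical.

```python
# Generic checkpoint/resume pattern for long training runs.
# Illustrative only: not Zhipu's recovery code; names are hypothetical.
import json
from pathlib import Path

CKPT = Path("checkpoint.json")

def save_checkpoint(step: int, state: dict) -> None:
    # Write to a temp file, then rename: a crash mid-write can't
    # corrupt the last good checkpoint.
    tmp = CKPT.with_suffix(".tmp")
    tmp.write_text(json.dumps({"step": step, "state": state}))
    tmp.replace(CKPT)

def load_checkpoint() -> tuple[int, dict]:
    if CKPT.exists():
        ckpt = json.loads(CKPT.read_text())
        return ckpt["step"], ckpt["state"]
    return 0, {"loss": None}  # fresh run

def train(total_steps: int = 10_000, ckpt_every: int = 1_000) -> None:
    step, state = load_checkpoint()
    while step < total_steps:
        step += 1
        state["loss"] = 1.0 / step  # stand-in for a real training step
        if step % ckpt_every == 0:
            save_checkpoint(step, state)
    print(f"finished at step {step}, loss={state['loss']:.6f}")

if __name__ == "__main__":
    # If the process dies anywhere in the loop, rerunning the script
    # resumes from the last checkpoint instead of restarting at step 0.
    train()
```

The economics follow directly: the flakier the cluster, the more often you must checkpoint, and saving the state of a huge model is itself expensive. That is why stability that “approaches the practical limits” of the hardware matters as much as peak FLOPS.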

They are essentially doing what we predicted in our Intelligence Explosion analysis: optimizing the entire stack from the kernel up to overcome hardware limitations.

What This Means For You

We are entering a bifurcated world of AI.

Zone A (The West): Built on NVIDIA/AMD, CUDA, and PyTorch. High cost, high performance, restricted access.

Zone B (The East): Built on Huawei Ascend, MindSpore, and CANN. Lower raw performance per chip, but massive scale and no export restrictions.

For developers, this means the flood of high-quality, open-weight models from China (like Qwen, DeepSeek, and now GLM) isn’t stopping. In fact, it’s going to accelerate. These models are being built on hardware that we can’t sanction, using code we can’t control.

The wall hasn’t just developed a crack. It has collapsed.

FAQ

Is the Ascend 910B actually better than the H100?

No. In raw performance, the NVIDIA H100 is still superior. However, the Ascend 910B is “good enough” (roughly comparable to an A100), and crucially, it is freely available to Chinese firms, whereas the H100 is banned under US export controls.

What is MindSpore?

MindSpore is Huawei’s AI computing framework, designed to rival Meta’s PyTorch and Google’s TensorFlow. It is optimized specifically for Ascend hardware, allowing efficient training without CUDA.

Will Zhipu AI release GLM-Image globally?

Likely yes. Chinese labs have historically been very aggressive with open-source releases to build global influence, as seen with the Qwen and DeepSeek model families.


Last Update: January 15, 2026