
Huawei’s SuperPoD Architecture: Revolutionizing AI Infrastructure

Imagine connecting thousands of smart chips spread across dozens of server cabinets, making them work together as if they were a single massive computer. This is exactly what Huawei demonstrated at HUAWEI CONNECT 2025, where the company revealed significant advancements in AI infrastructure architecture that could reshape the way AI systems are built and expanded worldwide.

Core Technology: UnifiedBus 2.0

At the heart of Huawei’s infrastructure approach is the UnifiedBus (UB). Yang Chaobin, Director of the Board and CEO of Huawei’s ICT Business Group, explained, “Huawei has developed the leading SuperPoD architecture based on the UnifiedBus communication protocol. This framework allows for deep connectivity between physical servers so they can learn, think, and deduce as a single logical server.”

The UnifiedBus protocol addresses two historical challenges that have limited large-scale AI computing: sustaining high bandwidth over long distances and keeping communication reliable at scale. Traditional copper interconnects offer high bandwidth but only over short distances, connecting perhaps two server cabinets.

Optical cables support longer ranges but suffer from reliability issues that compound as distance and scale increase. Eric Xu, Deputy Chairman and Rotating Chairman of Huawei, said that solving these fundamental communication challenges was essential to the company’s AI infrastructure strategy.
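To make the scale problem concrete, the sketch below models how per-link optical reliability compounds across a large fabric. The link counts and availability figure are hypothetical illustrations, not UnifiedBus data; the point is simply that system-level availability falls geometrically with the number of links unless the interconnect protocol tolerates individual link faults.

```python
# Illustrative only: hypothetical per-link availability, not UnifiedBus figures.
def fabric_availability(per_link_availability: float, num_links: int) -> float:
    """Probability that every optical link in the fabric is up at once,
    assuming independent link failures (a simplifying assumption)."""
    return per_link_availability ** num_links

# A per-link availability of 99.999% sounds excellent in isolation...
for links in (100, 10_000, 100_000):
    print(f"{links:>7} links -> fabric availability "
          f"{fabric_availability(0.99999, links):.3%}")
# ...but across ~100,000 links the chance that all are simultaneously healthy
# drops to roughly 37%, which is why a SuperPoD-scale fabric needs fault
# tolerance designed into the interconnect protocol itself.
```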

SuperPoD Architecture: Capacity and Performance

The Atlas 950 SuperPoD is the primary implementation of this architecture, consisting of up to 8,192 Ascend 950DT chips in a configuration Xu described as providing “8 EFLOPS in FP8 and 16 EFLOPS in FP4. The communication bandwidth will be 16 PB/s. This means a single Atlas 950 SuperPoD will have over 10 times the total bandwidth of the world’s top internet bandwidth.”

These specifications go well beyond incremental improvement. The Atlas 950 SuperPoD occupies 160 cabinets in a 1,000-square-meter footprint, with 128 computing cabinets and 32 communication cabinets linked entirely by optical connections. Total memory capacity reaches 1,152 terabytes, and Huawei maintains that system-wide latency is 2.1 microseconds.
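A quick back-of-the-envelope calculation puts these system totals in per-chip terms. The divisions below use only the figures cited above; the derived per-chip numbers are simple arithmetic for orientation, not Huawei-published specifications.

```python
# Derived per-chip figures for the Atlas 950 SuperPoD, computed from the
# system totals quoted in this article (not official per-chip specs).
chips = 8192              # Ascend 950DT chips per SuperPoD
fp8_eflops = 8            # system FP8 compute, EFLOPS
bandwidth_pb_s = 16       # system interconnect bandwidth, PB/s
memory_tb = 1152          # system memory capacity, TB
compute_cabinets = 128

print(f"FP8 per chip:       {fp8_eflops * 1000 / chips:.2f} PFLOPS")
print(f"Bandwidth per chip: {bandwidth_pb_s * 1000 / chips:.2f} TB/s")
print(f"Memory per chip:    {memory_tb * 1000 / chips:.0f} GB")
print(f"Chips per compute cabinet: {chips // compute_cabinets}")
```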

Planned for a later stage, the Atlas 960 SuperPoD will pack 15,488 Ascend 960 chips into 220 cabinets covering 2,200 square meters. Xu stated it will offer “30 EFLOPS in FP8 and 60 EFLOPS in FP4, with memory up to 4,460 terabytes and a communication bandwidth of 34 PB/s.”
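Comparing the two quoted configurations gives a sense of the generation-over-generation scaling. Again, the ratios below are straightforward divisions of the published totals, not additional disclosures.

```python
# Generation-over-generation scaling, derived from the totals quoted above.
atlas_950 = {"chips": 8192,  "fp8_eflops": 8,  "memory_tb": 1152, "bw_pb_s": 16}
atlas_960 = {"chips": 15488, "fp8_eflops": 30, "memory_tb": 4460, "bw_pb_s": 34}

for metric in atlas_950:
    ratio = atlas_960[metric] / atlas_950[metric]
    print(f"{metric:>10}: {ratio:.2f}x")
# Compute and memory grow faster than chip count (3.75x and 3.87x vs 1.89x),
# suggesting much of the jump comes from the Ascend 960 chip itself rather
# than from scaling out alone.
```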

Beyond AI: General Computing Applications

The SuperPoD concept extends beyond AI workloads to general computing with the TaiShan 950 SuperPoD. Built on Kunpeng 950 processors, this system addresses enterprise needs to replace aging mainframes and mid-range computers.

Xu positioned this as particularly relevant to the financial sector, where “the TaiShan 950 SuperPoD, combined with the GaussDB distributed database, can be an ideal replacement, supplanting mainframes, mid-range computers, and Oracle’s Exadata database servers.”

Open Architecture Strategy

Perhaps most importantly for the broader AI infrastructure market, Huawei announced the release of the UnifiedBus 2.0 technical specifications as open standards. The decision reflects both strategic positioning and practical constraints.

Xu acknowledged that “mainland China will lag in semiconductor manufacturing for a long time” and emphasized that “sustainable computing power can only be achieved with practically available process manufacturing nodes.”

Yang framed the open approach as building an ecosystem: “We are committed to our open hardware and open-source software approach that will help more partners develop their own SuperPoD solutions based on industry scenarios. This will accelerate innovation among developers and foster a thriving ecosystem.”

Conclusion

The SuperPoD architecture offers more than just incremental advancements in intelligent computing. Huawei proposes a new foundation for how massive computing resources are connected, managed, and expanded. Its open release of specifications and components will test whether collaborative development can accelerate AI infrastructure innovation within an ecosystem of partners. This has the potential to reshape competitive dynamics in the global AI infrastructure market.