Inside Huawei’s plan to make thousands of AI chips think like one computer

Imagine connecting thousands of powerful AI chips scattered in dozens of server cabinets and making them work together as if they were a single, massive computer. That is exactly what Huawei demonstrated at HUAWEI CONNECT 2025, where the company unveiled a breakthrough in AI infrastructure architecture that could reshape how the world builds and scales artificial intelligence systems.

Instead of traditional approaches where individual servers work somewhat independently, Huawei’s new SuperPoD technology creates what the company’s executives describe as a single logical machine made from thousands of separate processing units, allowing them to “learn, think, and reason as one.”

The implications extend beyond impressive technical specifications, representing a shift in how AI computing power can be organised, scaled, and deployed across industries.

The technical foundation: UnifiedBus 2.0

At the core of Huawei’s infrastructure approach is UnifiedBus (UB). Yang Chaobin, Huawei’s Director of the Board and CEO of the ICT Business Group, explained that “Huawei has developed the groundbreaking SuperPoD architecture based on our UnifiedBus interconnect protocol. The architecture deeply interconnects physical servers so that they can learn, think, and reason like a single logical server.”

The technical specifications reveal the scope of this achievement. The UnifiedBus protocol addresses two challenges that have historically limited large-scale AI computing: the reliability of long-range communications and the trade-off between bandwidth and latency. Traditional copper connections provide high bandwidth but only over short distances, typically spanning no more than two cabinets.

Optical cables support longer range but suffer from reliability issues that become more problematic the greater the distance and scale. Eric Xu, Huawei’s Deputy Chairman and Rotating Chairman, said that solving these fundamental connectivity challenges was essential to the company’s AI infrastructure strategy.

Xu detailed the breakthrough solutions in terms of the OSI model: “We have built reliability into every layer of our interconnect protocol, from the physical layer and data link layer, all the way up to the network and transmission layers. There is 100-ns-level fault detection and protection switching on optical paths, making any intermittent disconnections or faults of optical modules imperceptible at the application layer.”

SuperPoD architecture: Scale and performance

The Atlas 950 SuperPoD represents the flagship implementation of this architecture, comprising up to 8,192 Ascend 950DT chips in a configuration that Xu described as delivering “8 EFLOPS in FP8 and 16 EFLOPS in FP4. Its interconnect bandwidth will be 16 PB/s. This means that a single Atlas 950 SuperPoD will have an interconnect bandwidth over 10 times higher than the entire globe’s total peak internet bandwidth.”

The specifications are more than incremental improvements. The Atlas 950 SuperPoD occupies 160 cabinets across 1,000 m², with 128 compute cabinets and 32 communications cabinets linked by all-optical interconnects. The system’s memory capacity reaches 1,152 TB, and Huawei claims 2.1-microsecond latency across the entire system.

Later in the production pipeline will be the Atlas 960 SuperPoD, which is set to incorporate 15,488 Ascend 960 chips in 220 cabinets covering 2,200 m². Xu said it will deliver “30 EFLOPS in FP8 and 60 EFLOPS in FP4, and come with 4,460 TB of memory and 34 PB/s interconnect bandwidth.”
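Huawei's headline figures are system-wide totals, so it can help to put them on a per-chip basis. The sketch below divides the published interconnect bandwidth and memory figures evenly across the stated chip counts; this even division is an assumption for illustration only, since real interconnect topologies do not share bandwidth uniformly.

```python
# Back-of-envelope arithmetic on the published SuperPoD figures.
# Inputs come from the article; the even per-chip split is an
# illustrative assumption, not how the fabric actually allocates bandwidth.

PB = 10**15  # bytes
TB = 10**12
GB = 10**9

systems = {
    # name: (chip count, interconnect bandwidth in B/s, total memory in bytes)
    "Atlas 950": (8_192, 16 * PB, 1_152 * TB),
    "Atlas 960": (15_488, 34 * PB, 4_460 * TB),
}

for name, (chips, bandwidth, memory) in systems.items():
    bw_per_chip = bandwidth / chips / TB   # TB/s per chip
    mem_per_chip = memory / chips / GB     # GB per chip
    print(f"{name}: ~{bw_per_chip:.2f} TB/s and ~{mem_per_chip:.0f} GB per chip")
```

On these assumptions, each chip in either system gets on the order of 2 TB/s of interconnect bandwidth, suggesting the Atlas 960's larger totals come from scale rather than a radically different per-chip budget.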

Beyond AI: General-purpose computing applications

The SuperPoD concept extends beyond AI workloads into general-purpose computing through the TaiShan 950 SuperPoD. Built on Kunpeng 950 processors, this system addresses enterprise challenges in replacing legacy mainframes and mid-range computers.

Xu positioned this as particularly relevant for the finance sector, where “the TaiShan 950 SuperPoD, combined with the distributed GaussDB, can serve as an ideal alternative, and replace — once and for all — mainframes, mid-range computers, and Oracle’s Exadata database servers.”

Open architecture strategy

Perhaps most significantly for the broader AI infrastructure market, Huawei announced the release of UnifiedBus 2.0 technical specifications as open standards. The decision reflects both strategic positioning and practical constraints.

Xu acknowledged that “the Chinese mainland will lag behind in semiconductor manufacturing process nodes for a relatively long time” and emphasised that “sustainable computing power can only be achieved with process nodes that are practically available.”

Yang framed the open approach as ecosystem building: “We are committed to our open-hardware and open-source-software approach that will help more partners develop their own industry-scenario-based SuperPoD solutions. This will accelerate developer innovation and foster a thriving ecosystem.”

The company is to open-source hardware and software components, with hardware including NPU modules, air-cooled and liquid-cooled blade servers, AI cards, CPU boards, and cascade cards. For software, Huawei committed to fully open-sourcing CANN compiler tools, Mind series application kits, and openPangu foundation models by 31 December 2025.

Market deployment and ecosystem impact

Real-world deployment provides validation for these technical claims. Over 300 Atlas 900 A3 SuperPoD units have already been shipped in 2025, deployed by more than 20 customers across the internet, finance, carrier, electricity, and manufacturing sectors.

The implications for the development of China’s AI infrastructure are substantial. By creating an open ecosystem around domestic technology, Huawei is addressing the challenges of building competitive AI infrastructure inside parameters set by constrained semiconductor manufacturing and availability. Its approach enables broader industry participation in developing AI infrastructure solutions without needing access to the most advanced process nodes.

For the global AI infrastructure market, Huawei’s open architecture strategy introduces an alternative to the tightly integrated, proprietary hardware and software approach dominant among Western competitors. Whether the ecosystem proposed by Huawei can achieve comparable performance and maintain commercial viability remains to be demonstrated at scale.

Ultimately, the SuperPoD architecture represents more than an incremental advance for AI computing. Huawei is proposing a fundamental rethinking of how massive computational resources are connected, managed, and scaled. The open-source release of its specifications and components will test whether collaborative development across an ecosystem of partners can accelerate AI infrastructure innovation. That has the potential to reshape competitive dynamics in the global AI infrastructure market.

See also: Huawei commits to training 30,000 Malaysian AI professionals as local tech ecosystem expands


Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events; click here for more information.

AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.

The post Inside Huawei’s plan to make thousands of AI chips think like one computer appeared first on AI News.



source https://www.artificialintelligence-news.com/news/huawei-ai-chips-superpod-technology/
