Milestone in Domestic AI Computing Power: Sugon Deploys Three scaleX Ten-Thousand-Card Superclusters, Putting the Largest Domestic AI Computing Power Pool into Operation
On February 5, the National Supercomputing Internet Core Node entered trial operation in Zhengzhou with three Sugon scaleX 10,000-card superclusters, making it the first deployment in the country to reach 30,000 cards and the largest domestically produced AI computing power pool in actual operation.
Large artificial intelligence models are currently moving toward trillions of parameters, multimodal capabilities, and world models. The scaleX 10,000-card supercluster made its debut with a live hardware demonstration at the HAIC conference on December 18, 2025. Less than two months later, it went into operation at a scale of more than 30,000 cards, marking the official entry of domestically produced 10,000-card clusters into large-scale deployment and practical use.
Public information shows that the scaleX 10,000-card supercluster, built on the scaleX640, the world’s first single-rack 640-card supernode, increases single-rack computing density by 20 times. Compared with traditional solutions, it delivers a 30% to 40% performance improvement in training and inference of trillion-parameter MoE large models. The scaleX640 supernode has also passed a stability and reliability test lasting more than 30 days, laying the groundwork for expansion to ultra-large-scale clusters of 100,000 cards.
The scaleX640, driven by “open architecture + system innovation,” was officially released on November 6, 2025. Its “one-to-two” high-density architecture enables ultra-high-speed bus interconnection of 640 cards in a single rack, forming a large-scale, high-bandwidth, low-latency supernode communication domain, and multiple scaleX640 supernodes can be combined into a thousand-card-level computing unit.
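As a rough, back-of-the-envelope illustration of these scales (the exact rack counts and cluster sizes below are assumptions; Sugon has not published the precise configuration), the figures cited in this article imply roughly the following:

```python
# Back-of-the-envelope scale estimate based on figures cited in this article.
# Assumptions: exactly 640 accelerator cards per scaleX640 rack, a nominal
# 10,000-card supercluster, and three such superclusters in the Zhengzhou pool.
CARDS_PER_RACK = 640          # scaleX640: 640 cards in a single rack
CLUSTER_CARDS = 10_000        # nominal "ten-thousand-card" supercluster
NUM_CLUSTERS = 3              # three superclusters deployed at the core node

racks_per_cluster = -(-CLUSTER_CARDS // CARDS_PER_RACK)   # ceiling division
total_cards = NUM_CLUSTERS * CLUSTER_CARDS
total_racks = NUM_CLUSTERS * racks_per_cluster

print(f"Racks per 10,000-card supercluster: ~{racks_per_cluster}")   # ~16
print(f"Total cards across 3 superclusters: {total_cards:,}")        # 30,000
print(f"Total scaleX640 racks (approx.):    ~{total_racks}")         # ~48
```

Under these assumptions, combining just two scaleX640 supernodes already yields a thousand-card-level computing unit, consistent with the description above.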
Meanwhile, amid shifts in the global information technology industry ecosystem, the scaleX640 supernode adopts an “AI computing open architecture.”
This architecture was jointly released in September 2025 by Sugon and more than 20 industry-chain companies, opening up a number of key technical capabilities. Its goal is to lower the R&D barriers to AI clusters and avoid redundant investment. The scaleX640 supernode reportedly supports accelerators from multiple vendors at the hardware level, is compatible with mainstream computing ecosystems at the software level, and handles frontier scenarios such as trillion-parameter MoE model training, high-throughput inference, and scientific intelligence (AI4S).
According to Sugon’s official WeChat account, the computing power pool already covers the major large-scale AI computing scenarios in practical use, including trillion-parameter model training, high-throughput inference, and AI for Science. For ultra-large-scale training, it supports full-machine training and fault-tolerant recovery of trillion-parameter models. For high-throughput inference, it has served core intelligent services for several leading internet companies, with inference efficiency continuously improved through deep joint optimization. In AI for Science, it supports a domestically developed materials-research large model that topped authoritative international rankings and has helped leading domestic research teams improve protein-research efficiency by three to six orders of magnitude; it is also paired with the OneScience one-stop development platform for scientific large models, significantly lowering the barrier to interdisciplinary research and innovation.
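To make the “fault-tolerant recovery” idea concrete, the generic sketch below shows the basic checkpoint-and-resume pattern that large-scale training systems rely on. It is purely illustrative: the function and file names are hypothetical, and it does not reflect Sugon’s actual implementation.

```python
import os
import pickle

CKPT_PATH = "checkpoint.pkl"   # hypothetical checkpoint file
CKPT_EVERY = 100               # save every N steps (illustrative value)
TOTAL_STEPS = 1_000

def save_checkpoint(step, state):
    """Persist training state so a failed run can resume instead of restarting."""
    tmp = CKPT_PATH + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT_PATH)  # atomic rename avoids half-written checkpoints

def load_checkpoint():
    """Return (step, state) from the last checkpoint, or a fresh start."""
    if os.path.exists(CKPT_PATH):
        with open(CKPT_PATH, "rb") as f:
            ckpt = pickle.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {"loss": None}    # placeholder for model/optimizer state

def train():
    step, state = load_checkpoint()          # resume after a failure, if any
    while step < TOTAL_STEPS:
        state["loss"] = 1.0 / (step + 1)     # stand-in for a real training step
        step += 1
        if step % CKPT_EVERY == 0:
            save_checkpoint(step, state)

if __name__ == "__main__":
    train()
```

At the scale of a trillion-parameter model this pattern is far more elaborate (sharded checkpoints, redundant storage, automatic node replacement), but the resume-from-last-good-state principle is the same.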
Sugon Senior Vice President Li Bin stated that the wave of intelligence is reshaping the world with unprecedented strength. Sugon will take the launch of the core node as a starting point, continue to deepen technological R&D and application practices, and promote domestically produced intelligent computing power to serve various fields of economic and social development more efficiently, stably, and inclusively.
According to an earlier report by this outlet, Sugon Vice Chairman Li Jun said on January 18, at the “Artificial Intelligence+” and Digital Economy Henan Entrepreneurs Tour (hosted by the Zhengzhou Municipal Government and co-organized by Shanghai Securities News, the Zhengzhou Municipal Bureau of Commerce, and the Zhengzhou Office in Shanghai), that Sugon plans to establish an Advanced Computing Technology Research Institute in Zhengzhou. The institute has a planned construction area of 160,000 square meters, will initially assemble around 1,000 researchers, and will focus on R&D and innovation in AI and high-performance computing applications.
Risk Warning and Disclaimer
Markets carry risk; invest with caution. This article does not constitute personal investment advice and does not take into account individual users’ specific investment objectives, financial situation, or needs. Users should consider whether any opinions, views, or conclusions in this article suit their particular circumstances. Any investment made on this basis is at the user’s own risk.