AMD (AMD.US) Embarks on the Era of Rack-Scale AI Infrastructure: A Strong Partnership with Celestica (CLS.US) to Build the Helios Computing Cluster
AMD (AMD.US) has announced a deep collaboration with Celestica (CLS.US) to launch its new Helios rack-scale AI computing infrastructure platform, taking on NVIDIA’s NVL72 rack-scale AI platform in the global AI data center market. For AMD, which is striving to capture a meaningful share of NVIDIA’s (NVDA.US) roughly 90% hold on the trillion-dollar AI core computing cluster market, Helios is crucial to its revenue and profit outlook. AMD is gradually shifting its competitive focus to complete rack-level systems comparable to NVIDIA’s NVL72; the large-scale launch of the Helios AI computing cluster at the end of 2026 will put it in direct competition with NVIDIA’s “rack-scale AI infrastructure.”
Both companies stated in a release that, when the AI computing platform launches, Celestica will be responsible for the development, design, and manufacturing of the vertically scalable (scale-up) high-performance network switches within the AMD Helios rack-scale AI computing cluster architecture.
AMD Helios is a complete high-performance, open-standard rack-scale AI infrastructure platform designed for ultra-large-scale AI training and inference in AI data centers. Rack-scale architecture is currently the dominant approach to clustered computing: the entire rack, rather than a single CPU/GPU server, serves as the foundational computing unit for massive AI workloads. It integrates core computing components such as AI GPUs/AI ASICs, high-performance networking, and liquid cooling into a single AI computing infrastructure system to efficiently train large language models (LLMs) or handle other massive large-model workloads.
The two companies also noted that these scale-up switches will use advanced network chips to build the high-speed interconnect among next-generation AMD Instinct MI450 series AI GPUs, providing cutting-edge connectivity optimized for large-scale AI computing infrastructure clusters.
“The Helios rack-scale AI solution represents a new blueprint for AI computing infrastructure, enabling customers to deploy AI data centers at scale with the performance, efficiency, and flexibility metrics required for next-generation, immensely large AI workloads,” said Forrest Norrod, Executive Vice President and General Manager of AMD’s Data Center Solutions Business Unit, in the statement.
The companies said they are collaborating to support efficient, one-click deployment of Helios across cloud computing platforms, enterprise organizations, and large research environments. Following news of their joint push to ramp up Helios production capacity, Celestica’s stock rose about 3% by Monday’s U.S. close, while AMD’s stock surged over 3% intraday before closing up 1.7%.
It is reported that the AMD Helios rack-scale AI computing infrastructure is expected to begin bulk shipments to major cloud computing customers such as Microsoft and Amazon by the end of 2026.
A Strong Alliance Against the “NVIDIA Blackwell Series”
AMD and Celestica’s accelerated market launch of the Helios rack-scale AI platform coincides with AMD’s collaboration with several technology leaders to counter NVIDIA’s vertically integrated AI computing infrastructure solutions. AMD previously announced partnerships with Inspur and Broadcom, aiming to provide open, rack-scale AI computing infrastructure for high-performance computing clusters and large AI data centers, while striving to accelerate global “Sovereign AI” research.
Inspur will be among the first system suppliers to adopt the AMD “Helios” rack-scale AI computing cluster architecture, and AMD and Inspur will integrate a customized high-performance scale-up switch co-designed with ASIC and high-performance networking leader Broadcom. The system aims to simplify the deployment of larger AI computing clusters and deliver a rack-scale AMD solution that is more cost-effective and energy-efficient than NVIDIA’s Blackwell series.
Essentially, Helios is AMD’s response to NVIDIA’s Blackwell NVL72/GB200 NVL72 roadmap for rack-scale AI infrastructure. Both systems are based on 72 GPUs + CPU + high-speed interconnect + liquid cooling + rack-scale system engineering as the fundamental units for AI workloads, rather than treating individual servers as core products. AMD officially defines Helios as an open rack architecture based on OCP Open Rack Wide, aimed at large-scale training and inference; NVIDIA defines the GB200 NVL72 as a liquid-cooled rack-scale platform consisting of 36 Grace CPUs + 72 Blackwell GPUs. In other words, Helios is not just “another batch of MI450 GPUs,” but AMD’s first genuine attempt to go head-to-head with NVIDIA’s NVL72 architecture using a complete rack system.
Compared to AMD’s previous AI GPU platforms, the performance leap of Helios is considerable. AMD’s official benchmarks claim up to a 36-fold performance improvement over the previous-generation AMD AI computing platform, signaling that AMD’s approach to AI computing infrastructure has shifted from “selling more powerful GPU cards” to “selling complete AI factories”: packaging GPU, CPU, NIC, liquid cooling, network topology, and the ROCm software stack as one comprehensive AI computing solution.
The main selling points of Helios are its memory and open interconnect. Per AMD’s official figures, a 72-GPU Helios rack delivers up to 2.9 exaFLOPS of FP4 compute, 1.4 exaFLOPS of FP8, 31 TB of HBM4, 1.4 PB/s of aggregate memory bandwidth, and 260 TB/s of scale-up interconnect bandwidth. Compared with NVIDIA’s GB200 NVL72, Helios is significantly more aggressive in memory capacity, raw scale-up bandwidth, and open rack design, making it more appealing for long-context, large-parameter models and bandwidth-sensitive training/inference systems. AMD even publicly claims that Helios has 50% more memory capacity than NVIDIA’s next-generation computing platform, the Vera Rubin system.
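As a sanity check on these headline numbers, the rack totals can be divided across the 72 GPUs to get rough per-GPU figures. This is a back-of-the-envelope sketch assuming an even split; the per-GPU values below are derived here, not quoted from AMD:

```python
# Derive approximate per-GPU figures from AMD's published 72-GPU
# Helios rack totals, assuming an even split across GPUs.
GPUS_PER_RACK = 72

rack_totals = {
    "FP4 compute (exaFLOPS)": 2.9,
    "FP8 compute (exaFLOPS)": 1.4,
    "HBM4 capacity (TB)": 31.0,
    "Memory bandwidth (PB/s)": 1.4,
}

for metric, total in rack_totals.items():
    per_gpu = total / GPUS_PER_RACK
    print(f"{metric}: {total} rack total -> {per_gpu:.4f} per GPU")

# Notably, 31 TB / 72 works out to roughly 0.43 TB (~430 GB) of HBM4
# per MI450-class GPU, and 1.4 PB/s / 72 to roughly 19.4 TB/s of
# memory bandwidth per GPU.
```

These derived per-GPU numbers give a feel for why AMD pitches Helios at long-context, memory-hungry models: the capacity per accelerator is the lever, not just the rack-level aggregate.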
Why Did AMD Choose Celestica?
The reason AMD needs to partner with Celestica is quite practical: the bottlenecks of rack-scale AI systems are no longer limited to GPUs but extend to high-speed scale-up switches, liquid cooling engineering, manufacturing yield, delivery capabilities, and supply chain resilience.
AMD clearly stated in a release that Celestica is responsible for the development, design, and manufacturing of the Helios scale-up networking switches, which are based on UALoE and directly determine whether MI450 clusters can run stably at large scale. Celestica’s value lies not in manufacturing an ordinary component, but in helping AMD bridge the most challenging transition from a chip company to a systems company: engineering and productizing the open computing cluster architecture and delivering it to hyperscaler requirements. This mirrors NVIDIA’s growing emphasis in recent years on the tight synergy of rack systems, high-performance networking, and operational software.
Celestica is a Canadian electronics manufacturing services (EMS/ODM) and infrastructure solutions provider. Beyond traditional hardware assembly, it plays a critical role in the design, manufacturing, and integration of AI data center infrastructure products such as network switches, servers, rack-scale solutions, and high-bandwidth networking components. As cloud service providers and large tech companies (Google, Meta, Amazon, and others) build out AI data centers at scale, demand for high-speed networking, custom hardware, and rack-scale integrated solutions has surged, and Celestica is a core supplier of the high-performance network switches, servers, ASIC/TPU-related hardware modules, and integration services these data centers need. The company’s stock rose by as much as 220% over the course of 2025.