Update: 2025/11/27 10:22:19

TAIPEI, Taiwan, November 27, 2025 — ASUS today announced that the new AI supercomputer it built in collaboration with the National Center for High-performance Computing (NCHC) is now officially in operation. The facility features a dual-compute architecture comprising the Nano4 NVIDIA® HGX H200 cluster and the latest NVIDIA GB200 NVL72 system — Taiwan’s first fully liquid-cooled AI supercomputer deployment of this architecture.
The Nano4 NVIDIA HGX H200 system delivers up to 81.55 PFLOPS of performance and is ranked No. 29 on the TOP500 list, marking a meaningful milestone in strengthening Taiwan’s AI-computing capabilities and accelerating its intelligent transformation.
NCHC spokesperson Dr. Chia-Lee Yang said: “As the designer of the Nano4, NCHC drove the architecture, liquid-cooling strategy, and system integration. Combined with AI-infrastructure expertise from ASUS, this newly built system delivers advanced compute for generative AI, big data, deep learning, and HPC, unlocking unprecedented computational capabilities for both academia and industry.”
This AI supercomputer utilizes two compute systems. The first is built around the ASUS-designed ESC NM2N721-E1 system, featuring the NVIDIA GB200 NVL72 architecture, with each rack integrating 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs. The second incorporates high-performance ASUS servers to enhance flexibility and scalability, including ESC N8-E11V servers with NVIDIA HGX H200 and Direct-to-Chip (D2C) liquid cooling, as well as ESC8000-E12 servers accelerated by NVIDIA MGX H200 GPUs with NVIDIA NVLink Bridge to support demanding AI workloads.
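The per-rack composition of the GB200 NVL72 system can be sketched with a short calculation. The per-rack counts (36 Grace CPUs, 72 Blackwell GPUs) come from the announcement above; the number of racks is a hypothetical input for illustration only, as the announcement does not state it.

```python
# Per-rack counts as stated in the announcement for the NVIDIA GB200 NVL72 rack.
GRACE_CPUS_PER_RACK = 36
BLACKWELL_GPUS_PER_RACK = 72

def cluster_totals(num_racks: int) -> dict:
    """Aggregate CPU/GPU counts for a hypothetical number of NVL72 racks."""
    return {
        "racks": num_racks,
        "grace_cpus": num_racks * GRACE_CPUS_PER_RACK,
        "blackwell_gpus": num_racks * BLACKWELL_GPUS_PER_RACK,
    }

# Example: a hypothetical 4-rack deployment
print(cluster_totals(4))  # {'racks': 4, 'grace_cpus': 144, 'blackwell_gpus': 288}
```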
Built by ASUS from the ground up, the AI supercomputer implements direct liquid-cooling (DLC) technology, achieving a power usage effectiveness (PUE) of just 1.18. This combination of high performance, energy efficiency, and sustainable design demonstrates ASUS’s leadership in thermal innovation and energy management.
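To put the PUE figure in context, PUE is defined as total facility power divided by IT-equipment power, so 1.18 means only 0.18 W of cooling and distribution overhead per watt of compute. A minimal sketch, using a hypothetical 1,000 kW IT load (the actual facility load is not stated in the announcement):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT-equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical IT load; at the stated PUE of 1.18 the facility draws 1,180 kW total.
it_kw = 1000.0
total_kw = it_kw * 1.18
overhead_kw = total_kw - it_kw  # cooling, power distribution, etc.

print(round(pue(total_kw, it_kw), 2))  # 1.18
print(overhead_kw)                     # 180.0 kW of non-IT overhead
```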
Reliable, scalable storage is essential for AI supercomputers handling heavy AI workloads. ASUS deploys OJ340A-RS60 and RS501A-E12-RS12U storage servers, supporting file, object, and block storage with all-flash, tiered, and backup configurations for AI and HPC applications.
To speed deployment, ASUS implemented ASUS Infrastructure Deployment Center (AIDC), an automation solution that simplifies setup and management of HPC clusters, reducing deployment from three weeks to three days while ensuring performance, scalability, and reliability. The company also provides ASUS Professional Services covering validation, integration, and optimization of storage, liquid cooling, and network systems, enabling organizations to scale AI infrastructure efficiently, sustainably, and reliably.
With decades of server experience, ASUS delivers end-to-end AI infrastructure across servers, storage, edge, and software. With deep expertise, flexible customization, and proven deployment, ASUS enables enterprises to scale AI efficiently, holding more than 1,900 SPEC CPU and 230 MLPerf world records.