- Customer Name: National Center for High-performance Computing, National Institutes of Applied Research (NCHC, NIAR)
- Location: Taiwan
- Industry: Supercomputing, HPC

Project Background
As generative AI and big data analytics redefine the global technological landscape, the National Center for High-performance Computing, National Institutes of Applied Research (NCHC, NIAR) sought to significantly bolster Taiwan’s sovereign AI capabilities. The objective was to build a next-generation AI supercomputer capable of handling complex large language models (LLMs), deep learning, and advanced HPC workloads.
To achieve this, NCHC required a breakthrough dual-compute architecture that could deliver world-class performance while addressing the immense thermal challenges associated with next-generation GPUs. The project aimed not only for raw speed but also for a sustainable, energy-efficient design that could serve as a benchmark for future AI infrastructure in both academia and industry.
The Challenge
As a strategic pillar of Taiwan’s AI autonomy, the Nano4 project presented formidable challenges. Building this national infrastructure from scratch required ASUS to manage complex partner integrations and sophisticated technologies under a highly compressed timeline.
Furthermore, supporting critical research—from climate forecasting to engineering simulations—meant adhering to rigorous national standards, placing immense pressure on technical expertise and large-scale coordination.
Our Solution
To address NCHC’s requirements for extreme performance and energy efficiency, we developed a comprehensive, tailor-made AI supercomputing solution:
- Cutting-Edge Architecture Integration: ASUS demonstrated the technical capability to deploy the NVIDIA GB200 NVL72 alongside the HGX™ H200 cluster, providing a flexible and highly scalable dual-compute environment.
- Thermal Management Leadership: ASUS’s proficiency in Direct-to-Chip (D2C) liquid cooling was crucial for managing the extreme heat density of the NVIDIA Blackwell and HGX™ H200 systems.
- Rapid & Reliable Deployment: Leveraging the ASUS Infrastructure Deployment Center (AIDC), ASUS automated the setup process, slashing deployment time from three weeks to just three days and accelerating time-to-market for critical research resources.
- Unified and Scalable Infrastructure: ASUS provided a holistic ecosystem, integrating high-reliability OJ340A-RS60 and RS501A-E12-RS12U storage servers with professional validation of network and cooling systems to ensure a stable, production-ready environment for generative AI.
Why ASUS
NCHC partnered with ASUS due to its decades of expertise in server innovation and its proven ability to deliver complex, end-to-end AI infrastructure. Key factors in selecting ASUS included:
- Robust Hardware & Software Integration: ASUS goes beyond providing hardware; our deep expertise in software optimization ensures that GPU computing power is fully unleashed.
- Proven National-Level Experience: Having successfully built the Forerunner 1 and TAIWANIA 2 supercomputers for NCHC, ASUS has a track record of delivering high-standard infrastructure that meets rigorous national security and validation requirements.
- Deep Strategic Partnerships: As a global core partner of NVIDIA, ASUS provides early access to and seamless integration of the most cutting-edge AI architectures.
"Along with AI-infrastructure expertise from ASUS, this newly built system delivers advanced computing for generative AI, big data, deep learning, and HPC, unlocking unprecedented computational capabilities for both academia and industry." — Dr. Chia-Lee Yang, NCHC Spokesperson
Key Achievement: No. 29 on the Global TOP500 and a 1.18 PUE
The operational system, featuring the Nano4 NVIDIA HGX™ H200 cluster, delivers a staggering 81.55 PFLOPS of performance. This achievement secured the No. 29 spot on the TOP500 list, officially establishing it as one of the world’s most powerful AI supercomputers and the fastest, highest-density AI supercomputing facility in Taiwan.
By implementing ASUS’s advanced Direct Liquid Cooling (DLC) technology from the ground up, the facility achieved a Power Usage Effectiveness (PUE) of just 1.18. This demonstrates that high-performance AI computing can coexist with sustainable, energy-efficient design.
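For context, Power Usage Effectiveness is simply the ratio of total facility power draw to the power consumed by the IT equipment alone, so a PUE of 1.18 means only about 18% of the facility’s energy goes to cooling and power distribution rather than computing. A minimal sketch of the calculation, using purely illustrative load figures (not NCHC’s actual numbers):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A value of 1.0 is the theoretical ideal (zero cooling or
    power-distribution overhead); typical air-cooled data centers
    run well above that.
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1,000 kW of IT load plus 180 kW of
# cooling and distribution overhead yields a PUE of 1.18.
print(round(pue(total_facility_kw=1180.0, it_equipment_kw=1000.0), 2))  # → 1.18
```

Direct liquid cooling improves this ratio chiefly by shrinking the cooling term in the numerator, since moving heat in liquid is far more energy-efficient than moving it in air.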