Built for the Era of AI Reasoning​

ASUS AI POD with NVIDIA GB300 NVL72

ASUS AI POD with NVIDIA GB300 NVL72 combines 72 Blackwell Ultra GPUs and 36 Grace™ CPUs in a liquid-cooled, rack-scale platform designed to scale AI factories to unprecedented levels. Purpose-built for test-time scaling and advanced AI reasoning, it delivers the compute density and efficiency needed to power next-generation generative AI, LLMs, and scientific breakthroughs.​

The NVIDIA Blackwell Ultra GPU Breakthrough

The NVIDIA GB300 Grace Blackwell Ultra Superchip packs 208 billion transistors across dual reticle-limited dies connected by a 10TB/s interconnect. With 288GB of HBM3e and 1.5X more AI compute than the NVIDIA Blackwell GPU, it enables larger context windows, faster inference, and breakthrough scalability – delivering up to 50X the inference performance of the NVIDIA Hopper platform for next-generation AI models.
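Those per-GPU figures scale directly to the rack. A minimal back-of-the-envelope sketch, assuming the 72-GPU / 36-CPU NVL72 configuration described above (illustrative arithmetic only, not a measured specification):

```python
# Rough rack-level totals for a GB300 NVL72 system, derived from the
# per-GPU figures quoted above. Illustrative arithmetic only.

GPUS_PER_RACK = 72          # NVIDIA Blackwell Ultra GPUs per NVL72 rack
CPUS_PER_RACK = 36          # NVIDIA Grace CPUs per NVL72 rack
HBM3E_PER_GPU_GB = 288      # HBM3e capacity per Blackwell Ultra GPU

total_hbm_gb = GPUS_PER_RACK * HBM3E_PER_GPU_GB
print(f"GPUs per rack:             {GPUS_PER_RACK}")
print(f"Grace CPUs per rack:       {CPUS_PER_RACK}")
print(f"Aggregate HBM3e per rack:  {total_hbm_gb} GB (~{total_hbm_gb / 1000:.1f} TB)")
```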
The XA GB721-E2 offers seamless GPU-to-GPU communication and accelerated data movement to build a unified, high-performance fabric. Together, these technologies provide fast, efficient, and reliable connectivity – unlocking scalable performance for AI supercomputing infrastructure.
Close-up of GPU component showing advanced compute power for AI workloads

Fifth-generation NVIDIA NVLink™​

Powers seamless GPU-to-GPU communication, unlocking new levels of AI reasoning performance. The NVIDIA NVLink Switch features 144 ports with a switching capacity of 14.4 TB/s, allowing nine switches to interconnect the NVLink ports on each of the 72 NVIDIA Blackwell Ultra GPUs within a single NVLink domain.
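Those port counts imply the per-GPU NVLink fan-out. A quick consistency check, using only the figures quoted here (a sketch, not an official specification):

```python
# Sanity-check the NVLink domain arithmetic: 9 NVLink Switches with 144 ports
# and 14.4 TB/s of switching capacity each, spanning 72 Blackwell Ultra GPUs.

NUM_SWITCHES = 9
PORTS_PER_SWITCH = 144
SWITCH_CAPACITY_TBPS = 14.4
NUM_GPUS = 72

total_ports = NUM_SWITCHES * PORTS_PER_SWITCH                   # 1,296 ports in the domain
nvlink_ports_per_gpu = total_ports // NUM_GPUS                  # 18 NVLink ports per GPU
aggregate_capacity_tbps = NUM_SWITCHES * SWITCH_CAPACITY_TBPS   # 129.6 TB/s total
per_gpu_bandwidth_tbps = aggregate_capacity_tbps / NUM_GPUS     # 1.8 TB/s per GPU

print(f"NVLink ports per GPU:       {nvlink_ports_per_gpu}")
print(f"Aggregate switch capacity:  {aggregate_capacity_tbps:.1f} TB/s")
print(f"NVLink bandwidth per GPU:   {per_gpu_bandwidth_tbps:.1f} TB/s")
```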
Data center rack filled with ASUS servers optimized for AI training and inference.

NVIDIA Quantum-X800 InfiniBand /​
NVIDIA Spectrum-X™ Ethernet​

Paired with the NVIDIA ConnectX®-8 SuperNIC™ to deliver best-in-class RDMA, enabling peak AI workload efficiency.
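For a sense of the scale-out bandwidth this networking tier provides, here is an illustrative sketch. The 800 Gb/s line rate is the published rate for the Quantum-X800 / ConnectX-8 generation, and the one-SuperNIC-per-GPU layout is an assumption made purely for this example:

```python
# Illustrative scale-out bandwidth math for the compute fabric. The 800 Gb/s
# line rate is the published figure for the Quantum-X800 / ConnectX-8 generation;
# one SuperNIC per GPU is an assumption made only for this sketch.

LINE_RATE_GBPS = 800        # per ConnectX-8 SuperNIC port, in Gb/s
GPUS_PER_RACK = 72
NICS_PER_GPU = 1            # assumption for this sketch

per_nic_gb_per_s = LINE_RATE_GBPS / 8                                       # 100 GB/s per NIC
rack_injection_tbps = GPUS_PER_RACK * NICS_PER_GPU * LINE_RATE_GBPS / 1000  # rack total, Tb/s

print(f"Per-NIC throughput:         {per_nic_gb_per_s:.0f} GB/s")
print(f"Rack injection bandwidth:   {rack_injection_tbps:.1f} Tb/s")
```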
NVIDIA NVLink switch and networking technology for accelerated data throughput.

NVIDIA® BlueField®-3​

Transforms infrastructure with cloud networking and agile composable storage​.

Total Solution to Speed Time to Market​

Beyond comprehensive hardware, ASUS delivers a complete suite of software, storage, networking, cooling, and management services for the ASUS AI POD. This all-in-one approach streamlines operations, accelerates deployment, and empowers organizations to bring AI innovation to market faster.

Storage Solutions Optimized for Workloads

ASUS AI POD integrates all-flash storage for high performance, cost-effective hybrid storage for scalable capacity, and enterprise-grade resilience from edge to cloud. These enterprise storage solutions are optimized for HPC and data-driven AI workloads and suit a wide range of applications and platforms.
  • RS501A-E12-RS12U – All-flash, hot-tier storage for AI / HPC; a high-capacity storage server optimized for AI training workloads.
  • OJ340A-RS60 – Hybrid, cold-tier storage for AI / HPC; a balanced storage system designed for HPC and inference tasks.
  • VS320D-RS26U + JBOD – Flash and hybrid unified storage for enterprise virtualization and databases; a scalable storage solution supporting multi-rack AI deployments.
  • VS320D-RS12U + JBOD – Hybrid unified storage for enterprise data backup and surveillance; a compact storage system for diverse enterprise AI applications.

Maximize efficiency, minimize heat

Liquid-cooling architectures

  • Cutaway diagram of server cooling system maximizing efficiency and minimizing heat

    Liquid-to-air solutions

    Ideal for small-scale data centers with compact facilities.
    Designed to meet the needs of existing air-cooled data centers and easily integrate with current infrastructure.
    Perfect for enterprises seeking immediate implementation and deployment.

  • ASUS rack server with optimized airflow and advanced cooling design

    Liquid-to-liquid solutions

    Ideal for large-scale, extensive infrastructure with high workloads.
    Provides a low PUE and sustained energy efficiency over the long term (see the PUE sketch after this list).
    Reduces TCO for maximum value and cost-effective operations.
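PUE (power usage effectiveness) is simply total facility power divided by IT equipment power, so the benefit of moving cooling overhead into a liquid loop is easy to quantify. A minimal sketch with placeholder numbers (not measured ASUS AI POD figures):

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# The overhead figures below are placeholders for illustration, not measurements.

def pue(it_power_kw: float, overhead_kw: float) -> float:
    """Facility PUE given IT load and non-IT overhead (cooling, power delivery)."""
    return (it_power_kw + overhead_kw) / it_power_kw

it_load_kw = 1000.0                           # hypothetical IT load for one deployment
air_cooled       = pue(it_load_kw, 500.0)     # legacy air cooling: heavy CRAC/chiller overhead
liquid_to_air    = pue(it_load_kw, 300.0)     # liquid-to-air: reduced fan/chiller overhead
liquid_to_liquid = pue(it_load_kw, 150.0)     # liquid-to-liquid: facility water loop, lowest overhead

print(f"Air-cooled PUE:        {air_cooled:.2f}")        # 1.50
print(f"Liquid-to-air PUE:     {liquid_to_air:.2f}")     # 1.30
print(f"Liquid-to-liquid PUE:  {liquid_to_liquid:.2f}")  # 1.15
```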

Validated Topologies for Scalable AI Infrastructure

ASUS AI POD with NVIDIA GB300 NVL72 adopts validated reference architectures to streamline network topology deployment, ensuring predictable performance and simplified scaling.
  • Icon of a bar chart with an upward arrow and gear, representing predictive performance

    Predictable Performance​

    Assured bandwidth and low latency for demanding AI workloads​

  • Icon of connected nodes in a network, representing simplified integration

    Simplified Scaling​

    Validated designs ensure smooth growth from rack to cluster​

  • Icon of a gear connected to hierarchical lines, representing deployment and scalability

    Deployment Efficiency​

    Reference architectures accelerate setup and reduce complexity​
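The specific validated reference architectures are not reproduced on this page, but the reason they make scaling predictable is straightforward: topology sizing becomes deterministic arithmetic. As a generic illustration, a two-tier non-blocking leaf/spine sizing calculation might look like the sketch below; the switch radix and endpoint counts are placeholders, not ASUS or NVIDIA figures.

```python
# Generic two-tier (leaf/spine) non-blocking fabric sizing. The radix and
# endpoint counts are placeholders for illustration, not figures taken from
# the ASUS or NVIDIA reference architectures.

import math

def size_two_tier_fabric(endpoints: int, radix: int) -> dict:
    """Size a non-blocking leaf/spine fabric where each leaf splits its radix
    evenly between downlinks (to endpoints) and uplinks (to spines)."""
    down_per_leaf = radix // 2
    leaves = math.ceil(endpoints / down_per_leaf)
    spines = math.ceil(leaves * down_per_leaf / radix)   # spread uplinks across spines
    return {"leaves": leaves, "spines": spines, "max_endpoints": down_per_leaf * radix}

# Example: the GPU NICs of four NVL72 racks on 64-port switches (placeholder numbers).
print(size_two_tier_fabric(endpoints=4 * 72, radix=64))
# -> {'leaves': 9, 'spines': 5, 'max_endpoints': 2048}
```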

Engineer working on server racks to configure AI infrastructure systems.

Accelerate your time to market

ASUS self-owned software and controller

  • ASUS Control Center (ACC) software interface for server monitoring and AI management

    ASUS Control Center (ACC)

    Centralized IT-management software for monitoring and controlling ASUS servers

    • Power Master – Effective energy control for data centers
    • Easy search and control of your devices
    • Enhance information security easily and quickly
  • What is the ASUS AI POD with NVIDIA GB300 NVL72?
    The ASUS AI POD with NVIDIA GB300 NVL72 is a rack-scale AI infrastructure solution that integrates 72 NVIDIA Blackwell Ultra GPUs and 36 NVIDIA Grace CPUs to power large-scale LLM inference, training, and AI reasoning.