Update: 2025/10/28 22:00:00
Washington, D.C., Oct. 28, 2025 — ASUS will showcase a flexible, end-to-end AI Factory at NVIDIA GTC Washington, D.C. (Walter E. Washington Convention Center; ASUS booth #738). Spanning extreme-edge, deskside, and rack-scale deployments built on the NVIDIA Blackwell architecture, the lineup is designed to fit a wide range of operational environments—from research labs and control rooms to on-prem data centers—with options that support disciplined lifecycle management and on-site deployment needs. Shipments of the ASUS XA GB721-E2, built on the NVIDIA GB300 NVL72 platform, and the XA NB3I-E12, based on the NVIDIA HGX B300 system, began in September 2025.

The ASUS AI Factory combines hardware, optimized software stacks, and services to streamline AI adoption across generative AI, VLM/LLM inference, computer vision, and predictive analytics. Organizations can standardize on common NVIDIA frameworks while choosing the right footprint—edge appliances, deskside systems, or rack-scale PODs—to meet data locality, availability, and serviceability preferences. High-density nodes such as the XA NB3I-E12 (leveraging the NVIDIA HGX B300 platform) include eight NVIDIA ConnectX-8 InfiniBand SuperNICs, five PCIe® expansion slots, 32 DIMM slots, and 10 NVMe drive bays, helping teams scale performance without sacrificing maintainability.
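As a rough illustration of what "standardize on common NVIDIA frameworks" can look like in practice, the minimal sketch below calls an OpenAI-compatible chat endpoint of the kind exposed by NVIDIA NIM microservices; the same client code works whether the model is served from an edge appliance, a deskside system, or a rack-scale POD. The endpoint URL and model name are placeholders, not values tied to any specific ASUS system.

```python
# Minimal sketch, assuming an OpenAI-compatible inference endpoint
# (for example, one served by an NVIDIA NIM microservice). The URL and
# model ID below are placeholders for illustration only.
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # placeholder URL
MODEL = "meta/llama-3.1-8b-instruct"                     # placeholder model ID

def ask(prompt: str) -> str:
    """Send a chat request and return the model's reply text."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the benefits of rack-scale AI deployments."))
```

Because the API surface stays the same across footprints, moving a workload from a deskside box to a rack-scale POD is largely a matter of repointing the endpoint.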

Rack-Scale & GPU Servers
ASUS XA GB721-E2 (Grace Blackwell Ultra system): A reference blueprint for multi-node training and large-scale inference using NVIDIA networking and software.
RS720A-E13-RS8G (2U GPU server): Versatile 2U platform for training and high-throughput inference with thoughtful service access.
ESC8000A-E13X (NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA ConnectX-8 SuperNIC): Multi-GPU acceleration for simulation, visualization, and AI pipelines with 400G-class networking options.

ASUS ASCENT GX10: Compact desktop AI supercomputer for developers and analysts, delivering 1 petaFLOP of AI performance and 128GB of memory for fine-tuning models with up to 200 billion parameters.
ExpertCenter Pro ET900N G3: Part of a new class of deskside AI supercomputers, designed from the ground up to build and run AI. The system features the NVIDIA® GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of coherent memory.
ASUS IoT PE3000N: Compact edge-AI platform built on NVIDIA Jetson Thor T5000, delivering up to 2,070 FP4 TFLOPS from its Blackwell-architecture GPU alongside a 14-core Arm® CPU and 128GB of LPDDR5X memory. Its scalable, modular I/O supports PoE, GMSL, CAN, QSFP28 and more, while a 12–60V DC input with ignition control enables vehicle and field-robot power. PTP/PPS supports precise sensor time synchronization (a minimal alignment sketch follows this list); TPM 2.0 and ruggedized connectors aid reliability. Options include up to 4×25GbE and 16 GMSL cameras for high-bandwidth sensor fusion, making it well suited to inspection, mobility, and smart-infrastructure use cases.
XA NB3I-E12 (B300 air-cooled): Balanced thermals and straightforward maintenance for mainstream deployments where uptime and accessibility matter.
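To show why the PE3000N's PTP/PPS support matters for sensor fusion, the sketch below (an assumed example, not ASUS or NVIDIA software) pairs camera and lidar samples by nearest timestamp; this only gives meaningful results when every sensor stamps its data against the same PTP-disciplined clock.

```python
# Assumed example: pairing two sensor streams by nearest timestamp on a
# shared, PTP-disciplined clock. Sample rates and tolerance are illustrative.
from bisect import bisect_left

def pair_by_timestamp(cam_ts, lidar_ts, max_skew_s=0.005):
    """Match each camera timestamp to the closest lidar timestamp.

    cam_ts, lidar_ts: sorted lists of timestamps (seconds, shared clock).
    max_skew_s: reject pairs farther apart than this tolerance.
    Returns a list of (camera_time, lidar_time) pairs.
    """
    pairs = []
    for t in cam_ts:
        i = bisect_left(lidar_ts, t)
        # Candidates are the lidar samples just before and just after t.
        candidates = [c for c in (i - 1, i) if 0 <= c < len(lidar_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(lidar_ts[c] - t))
        if abs(lidar_ts[best] - t) <= max_skew_s:
            pairs.append((t, lidar_ts[best]))
    return pairs

# Example: 30 Hz camera frames and 10 Hz lidar sweeps on a common clock.
cams = [k / 30.0 for k in range(30)]
lidars = [k / 10.0 for k in range(10)]
print(pair_by_timestamp(cams, lidars)[:3])
```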
Join the ASUS 2025 GTC Washington, D.C. Theater Talk
Don’t miss the 15-minute session “ASUS Infrastructure for Every Scale—from Edge to Trillion-Token AI” at the Expo Hall Stage on October 29, from 4:00 to 4:15 PM EDT. During this presentation we will share how ASUS helps customers build future-ready AI data centers. Learn how our servers, rack-scale ASUS AI PODs with NVIDIA GB200/GB300 NVL72, and high-serviceability designs address diverse AI workloads and deployment challenges.