Next-Level 8-GPU Server for Enterprise and CSP Heavy AI Workloads

NVIDIA HGX B300 8-GPU server

Equipped with the NVIDIA Blackwell HGX B300 8-GPU and dual 6th Gen Intel Xeon® Scalable processors, ASUS XA NB3I-E12 is designed for heavy AI workloads, with eight embedded NVIDIA CX8 InfiniBand NICs on the GPU baseboard, five PCIe expansion slots, 32 DIMM slots, 10 NVMe drive bays, and dual 10Gb LAN. It transforms data into intelligence for efficient real-world automation and is ideal for large enterprises and cloud service providers running large language models (LLMs), research institutions and universities performing scientific computing, and financial and automotive organizations focused on AI model training and inference.

Unmatched End-to-End Accelerated Computing Platform

The NVIDIA HGX B300 integrates NVIDIA Blackwell Ultra GPUs with high-speed interconnects to propel the data center into a new era of accelerated computing and generative AI. As a premier accelerated scale-up platform, NVIDIA Blackwell-based HGX systems are designed for the most demanding generative AI, data analytics, and HPC workloads.
Bar chart comparing model training speed across NVIDIA GPU platforms (HGX H100 baseline, HGX B200, HGX B300), indicating up to 1.7× faster AI training with HGX B300.
Bar chart comparing real-time inference throughput across the same platforms, showing a significant increase in AI inference performance with HGX B300.
HGX B300 delivers next-level AI performance, achieving up to 11X higher inference performance compared to the previous NVIDIA Hopper™ generation on models such as Llama 3.1 405B. Powered by the second-generation Transformer Engine with custom Blackwell Tensor Core technology and TensorRT™-LLM optimizations, it accelerates inference for large language models while enabling up to 4X faster training with 8-bit floating point (FP8) and new precisions.
This breakthrough is further enhanced by fifth-generation NVLink with 1.8TB/s GPU-to-GPU interconnect, InfiniBand networking, and NVIDIA Magnum IO™ software, ensuring efficient scalability for enterprises and large-scale GPU computing clusters.
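As a rough illustration of how the FP8 precision mentioned above is exercised in practice, the sketch below (illustrative only, not ASUS or NVIDIA reference code) runs a single FP8 training step through NVIDIA Transformer Engine from PyTorch; the layer size, batch size, and loss are placeholder values.

```python
# Minimal sketch: one FP8 training step with NVIDIA Transformer Engine.
# Shapes and hyperparameters are illustrative placeholders.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe (HYBRID: E4M3 forward, E5M2 gradients).
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

model = te.Linear(4096, 4096, bias=True).cuda()          # FP8-capable layer
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

inp = torch.randn(16, 4096, device="cuda")
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)                  # GEMMs dispatched to FP8 Tensor Cores
    loss = out.float().pow(2).mean()  # placeholder loss

loss.backward()
optimizer.step()
```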
3D system layout diagram of the ASUS HGX B300 8-GPU server. Labels identify the NVIDIA Blackwell Ultra GPUs, dual Intel Xeon Scalable processors, DDR5 DIMM memory up to 4TB, five PCIe Gen5 expansion slots, ten 2.5-inch NVMe drive bays, and redundant 3200W 80 PLUS Titanium power supplies.

GPU
HGX BLACKWELL ULTRA B300

Processor
2 × 6th Gen Intel® Xeon® Scalable processors (SP), up to 350W TDP

Memory
32 × DDR5-6400 DIMM slots (max. 4TB)

Expansion
5 × PCIe Gen5 slots (4 × x16 + 1 × x8)

Storage
10 × 2.5" bays (10 × NVMe)

Power Supply
5+5 redundant 3200W 80 PLUS Titanium PSUs

Rear panel view of the server showing 15 GPU fans (54V, 8080) and 10 power supply units (54V, 3200W).

GPU Fan x15 (54V, 8080)

PSU x10 (54V, 3200W)

Front panel layout of the server showing labeled components including OSFP connectors, RAID, PCIe slots, U.2 NVMe storage bays, the IO panel, and a BlueField-3 DPU.

HGX B300

OSFP Connector x8

RAID (x8)

PCIe Slot (x16)

BlueField-3

Storage U.2 x10

FPB

IO Panel

Modular Design with Reduced Cable Usage

Easy troubleshooting and thermal optimization

The modular design minimizes cable usage, speeding up system assembly and simplifying routing, while improving thermal performance by reducing airflow obstruction. Board-to-board connections further lower cable loss and latency, making maintenance easier and ensuring high serviceability.

Advanced NVIDIA Technologies

The full power of NVIDIA GPUs, DPUs, NVLink, NVSwitch, and networking

XA NB3I-E12 accelerates the development of AI and data science by incorporating fifth-generation NVLink with NVSwitch technology, as well as the NVIDIA ConnectX-8 SmartNIC. This also enables GPUDirect® RDMA and GPUDirect Storage with NVIDIA Magnum IO, plus NVIDIA AI Enterprise – the software layer of the NVIDIA AI platform.
Topology diagram comparing PCIe-switch-based and NVSwitch-based server designs. The NVSwitch-based HGX architecture provides full GPU-to-GPU bandwidth with fewer latency bottlenecks, shown with color-coded connection lines and arrows.
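As a rough sketch of how training frameworks actually consume that NVSwitch bandwidth, the example below runs a single-node NCCL all-reduce across the eight GPUs with torch.distributed; the script filename in the launch command is hypothetical.

```python
# Minimal sketch: single-node all-reduce over NVLink/NVSwitch via NCCL.
# Launch with: torchrun --nproc_per_node=8 allreduce_check.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL selects NVLink/NVSwitch paths
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    # 1 GiB of float32 per GPU; reduction traffic stays on the NVSwitch fabric.
    x = torch.ones(256 * 1024 * 1024, device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"per-element result: {x[0].item()} (equals world size)")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```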

Optimized Thermal Design

A two-level GPU and CPU sled for thermal efficiency

XA NB3I-E12 features a streamlined design with dedicated CPU and GPU airflow tunnels for efficient heat dissipation. The two-level GPU and CPU sled design enhances thermal efficiency, scalability, and overall performance by allowing heat to be expelled into the surrounding ambient air – resulting in greater energy efficiency and overall system power savings.
Server layout diagrams illustrating the optimized thermal design of the ASUS HGX B300 system. The left image shows the two-level sled separating GPU and CPU zones for heat dissipation; the right image shows the complete server chassis with GPUs installed, designed for greater energy efficiency and system power savings.

5+5 Power Supplies

Exceptional power efficiency

To minimize operating costs, XA NB3I-E12 pairs effective cooling with 5+5 redundant 80 PLUS Titanium power supplies, delivering reliable, abundant power with exceptional efficiency.
5+5 redundant power supply modules in server, ensuring high efficiency and failover capability.

Serviceability

Improved IT operations efficiency

  • Ergonomic handle design

  • Tool-free thumbscrews

  • Riser-card catch

  • Tool-free cover

BMC

Remote server management

ASUS ASMB11-iKVM is the latest server-management solution from ASUS, built upon the ASPEED AST2600 chipset running the latest AMI MegaRAC SP-X. The module provides multiple interfaces for out-of-band server management through WebGUI, Intelligent Platform Management Interface (IPMI), and the Redfish® API.
Bar chart comparing BMC boot time: AST2600 at 12.885 seconds versus AST2500 at 21.262 seconds, showing a roughly 40% faster boot on the AST2600.
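As an illustration of the out-of-band management interfaces listed above, the hedged sketch below queries a system's power state and health over the standard DMTF Redfish API; the BMC address and credentials are placeholders, not ASUS defaults.

```python
# Minimal sketch: read power state and health over the standard Redfish API.
# Endpoint paths follow the DMTF Redfish specification.
import requests

BMC = "https://10.0.0.100"          # hypothetical BMC address
AUTH = ("admin", "password")        # replace with real credentials

session = requests.Session()
session.auth = AUTH
session.verify = False              # lab only; use proper TLS certificates in production

# Enumerate systems, then fetch the first system's resource.
systems = session.get(f"{BMC}/redfish/v1/Systems").json()
first = systems["Members"][0]["@odata.id"]
info = session.get(f"{BMC}{first}").json()

print("PowerState:", info.get("PowerState"))
print("Health:", info.get("Status", {}).get("Health"))
```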

IT infrastructure-management software

Streamline IT operations via a single dashboard

ASUS Control Center (ACC) is a remote IT-management software application designed to monitor hardware and software assets and inventory status, enable seamless remote BIOS configuration and updates, provide efficient IT diagnostics and troubleshooting, and enhance security with hotfix updates – making server management easier for any IT infrastructure.
IT infrastructure management software dashboard displaying performance, alerts, and resource metrics.

Hardware Root-of-Trust Solution

Detect, recover, boot and protect

ASUS servers integrate a PFR FPGA as the platform Root-of-Trust solution for firmware resiliency, preventing hackers from gaining access to the infrastructure. ASUS security solutions are fully compliant with the 2018 National Institute of Standards and Technology (NIST) SP 800-193 specification.

* Platform Firmware Resilience (PFR) module must be specified at time of purchase and is factory-fitted. It is not for sale separately.

Trusted Platform Module 2.0

ASUS servers also support Trusted Platform Module 2.0 (TPM 2.0), securing hardware through integrated cryptographic keys and offering regular firmware updates to address vulnerabilities.
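For illustration, the following sketch (assuming a Linux host with the standard tpm2-tools package installed; this is generic tooling, not ASUS-specific) checks that the operating system exposes a TPM 2.0 device and dumps its fixed properties.

```python
# Minimal sketch: verify a TPM 2.0 device is exposed and query its fixed properties.
import pathlib
import subprocess

tpm_dev = pathlib.Path("/dev/tpmrm0")                     # in-kernel TPM 2.0 resource manager
version = pathlib.Path("/sys/class/tpm/tpm0/tpm_version_major")

if tpm_dev.exists():
    print("TPM character device present:", tpm_dev)
if version.exists():
    print("TPM major version:", version.read_text().strip())

# tpm2-tools query for manufacturer and firmware properties (typically requires root).
subprocess.run(["tpm2_getcap", "properties-fixed"], check=False)
```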

Intel Xeon 6 processors

Intel Xeon 6 processors represent a significant leap forward in performance and efficiency, specifically designed to meet the demands of advanced AI, networking and data center workloads. Featuring an all-P-core architecture, these processors deliver exceptional computational power, enabling enterprises to tackle the most demanding applications with ease. With support for high-speed DDR5 memory and PCIe 5.0 I/O, the Intel Xeon 6 processors ensure seamless scalability and enhanced data throughput, making them ideal for modern high-performance environments. Additionally, their optimized energy efficiency and robust design cater to the evolving needs of next-generation data centers, solidifying Intel's leadership in AI and networking solutions.
  • What AI workloads is the XA NB3I-E12 designed for?
    XA NB3I-E12, with the NVIDIA Blackwell HGX B300 8-GPU and dual Intel Xeon 6 processors, handles LLMs, AI training, and scientific computing, with eight embedded CX8 InfiniBand NICs for ultra-low latency.