The Foundational Building Block for Enterprise AI Factories

NVIDIA Blackwell HGX™ B200 8-GPU AI Server

Equipped with the NVIDIA Blackwell HGX™ B200 8-GPU baseboard and powered by dual 5th Gen Intel® Xeon® Scalable processors, the ASUS ESC NB8-E11 features direct GPU-to-GPU interconnect via NVIDIA NVLink™ with 1,800 GB/s of bandwidth for optimized scaling. It employs a dedicated one-GPU-to-one-NIC topology, supporting up to eight NICs for maximum throughput in compute-intensive workloads. Designed as a foundational building block for enterprise AI factories, the ESC NB8-E11 accelerates computing, networking, storage, and software integration, enabling faster, more reliable AI factory deployments with reduced risk and improved operational efficiency.
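The one-GPU-to-one-NIC topology can be sketched as a simple 1:1 mapping, in which each of the eight GPUs is paired with its own dedicated network adapter so no two GPUs contend for a shared NIC. This is an illustrative sketch only, not ASUS tooling; the `mlx5_N` device names are hypothetical placeholders.

```python
# Illustrative sketch of a one-GPU-to-one-NIC topology (not ASUS tooling).
# Device names are hypothetical placeholders, not taken from this page.
NUM_GPUS = 8

def gpu_to_nic(gpu_index: int) -> str:
    """Map a GPU index (0-7) to its dedicated NIC in a 1:1 topology."""
    if not 0 <= gpu_index < NUM_GPUS:
        raise ValueError(f"GPU index must be in 0..{NUM_GPUS - 1}")
    return f"mlx5_{gpu_index}"  # hypothetical RDMA device name

topology = {gpu: gpu_to_nic(gpu) for gpu in range(NUM_GPUS)}
# Every GPU gets its own NIC; no NIC is shared between GPUs.
assert len(set(topology.values())) == NUM_GPUS
```

Because the mapping is injective, each GPU's network traffic has a dedicated path, which is the property the topology is designed to guarantee.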

Propelling the Data Center Into a New Era of Accelerated Computing

NVIDIA Blackwell HGX™ B200

Engineered for the most demanding AI, data analytics, and high-performance computing (HPC) workloads, NVIDIA HGX B200 transforms data centers with next-generation accelerated computing and generative AI capabilities. Featuring powerful NVIDIA Blackwell GPUs and ultra-fast interconnects, it delivers up to 1.4 TB of HBM3E memory for exceptional data throughput, and 1,800 GB/s of NVLink bandwidth via NVSwitch™ for seamless GPU communication.
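The headline memory figure follows from simple arithmetic over the published per-GPU Blackwell specs. The 180 GB HBM3E-per-GPU figure is an assumption drawn from NVIDIA's B200 specifications, not stated on this page:

```python
# Back-of-envelope check of the headline figures.
# Assumption (from published B200 specs, not this page): 180 GB HBM3E per GPU.
GPUS = 8
HBM3E_PER_GPU_GB = 180        # assumed per-GPU HBM3E capacity
NVLINK_PER_GPU_GBPS = 1800    # 1,800 GB/s GPU-to-GPU bandwidth via NVSwitch

total_hbm_tb = GPUS * HBM3E_PER_GPU_GB / 1000
print(f"Aggregate HBM3E: {total_hbm_tb:.2f} TB")  # 1.44 TB, i.e. "up to 1.4 TB"
print(f"NVLink bandwidth per GPU: {NVLINK_PER_GPU_GBPS} GB/s")
```

Eight GPUs at 180 GB each gives 1,440 GB, matching the "up to 1.4 TB" aggregate quoted above.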
In large-scale model workloads such as GPT-MoE-1.8T, the NVIDIA B200 NVL8 delivers a major leap in performance:
Bar chart comparing training throughput in tokens per second.

Training at Scale

When training with 32,000 GPUs, B200 NVL8 offers up to 1.8x-faster performance compared to H200 NVL8.
Bar chart comparing inference throughput in tokens per second.

Inference at Scale

For inference using 32 GPUs, B200 NVL8 achieves an impressive 5x performance boost over H200 NVL8.
3D exploded view of the NVIDIA HGX B200 8-GPU AI server showing key components including GPUs, CPUs, NICs, and power supply, labeled to indicate system layout and airflow.

1,800 GB/s bandwidth
Direct GPU-to-GPU interconnect via NVLink

PCIe 5.0 switch board delivering faster
connections between storage, GPUs, and NICs

10 NVMe drive bays
- 8 on the front panel
- 2 on the rear panel

Modular, tool-less design
with sleds and handles

Independent airflow-tunnel design
with dual-rotor fan modules

Front view of the NVIDIA Blackwell HGX B200 server showing system layout including 8 GPUs, NIC slots, and labeled airflow direction, illustrating hardware accessibility and cooling path.

GPU sled fans: 8080 x 15

NVMe x 8

Rear view of the server showing labeled components, including OCP 3.0 slots, USB 3.0 ports, 1-Gigabit Ethernet, 10-Gigabit Ethernet, and power supply units.

NVMe x 2

1 x PCIe Gen4 x8

8 x PCIe Gen5 x16 (LP)

2 x PCIe Gen5 x16

PSU x 6

Modular Design with Reduced Cable Usage

Easy troubleshooting and thermal optimization

The modular design greatly reduces the number of cables used, shortening system assembly time, eliminating complex cable routing, and improving thermal performance by lowering the risk of obstructed airflow.
Close-up image of the server’s modular board design showing minimized cable use through direct GPU interconnects and optimized thermal routing, enhancing maintainability and airflow efficiency.

Advanced NVIDIA Technologies

The full power of NVIDIA GPUs, DPUs, NVLink, NVSwitch, and networking

The ESC NB8-E11 accelerates the development of AI and data science by incorporating fifth-generation NVLink and fourth-generation NVSwitch technology, as well as NVIDIA ConnectX-7 SmartNICs. It also enables GPUDirect® RDMA, accelerated storage with NVIDIA Magnum IO™, and NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform.
Topology diagram comparing PCIe switch-based and NVSwitch-based server designs. The HGX B200 architecture uses NVSwitch for full GPU-to-GPU bandwidth with fewer latency bottlenecks, shown with color-coded connection lines and arrows.

Optimized Thermal Design

A two-level GPU and CPU sled for thermal efficiency

The ESC NB8-E11 features a streamlined design with dedicated CPU and GPU airflow tunnels for efficient heat dissipation. The two-level GPU and CPU sled design enhances thermal efficiency, scalability, and performance by allowing heat to be expelled into the ambient air, resulting in greater energy efficiency and overall system power savings.
Animation showing the optimized thermal design: top view of a dual GPU-and-CPU server module with red heatsinks, highlighting the hot-air exhaust path from front to rear.
Animation showing the optimized thermal design: top view of a dual GPU-and-CPU server module with blue heatsinks, showing the cool-air intake zones that improve airflow and cooling efficiency.

5+1 Power Supplies

High level of power efficiency

To reduce operating costs, ESC NB8-E11 provides effective cooling and innovative components. The 80 PLUS® Titanium power supplies come in a redundant 5+1 arrangement to efficiently deliver an abundance of power.
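The "5+1" arrangement means six PSUs are installed but the system's power budget is carried by five, so any single PSU can fail without service interruption. The arithmetic can be sketched as follows; the 3,000 W per-PSU rating is a hypothetical figure for illustration, not a specification from this page:

```python
# Hedged sketch of 5+1 PSU redundancy arithmetic.
# Assumption: a hypothetical 3,000 W rating per PSU (not from this page).
INSTALLED_PSUS = 6
REQUIRED_PSUS = 5            # "5+1": five carry the load, one is the spare
PSU_WATTS = 3000             # hypothetical per-PSU rating

budget_w = REQUIRED_PSUS * PSU_WATTS
after_one_failure_w = (INSTALLED_PSUS - 1) * PSU_WATTS
# One PSU failure still leaves enough capacity to cover the full budget.
assert after_one_failure_w >= budget_w
print(f"Power budget: {budget_w} W; after one PSU failure: {after_one_failure_w} W")
```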
5+1 redundant power supply modules in server, ensuring high efficiency and failover capability.

Serviceability

Improved IT operations efficiency

  • A photo of server with ergonomic handle design for easy removal and maintenance.

    Ergonomic handle design

  • A photo of hands removing thumbscrews without tools for fast and simple access to components.

    Tool-free thumbscrews

  • A photo of quick-release catch shown being pulled for secure and fast component access.

    Riser-card catch

  • A photo of user removing server top cover easily with tool-free design.

    Tool-free cover

BMC

Remote server management

ASUS ASMB11-iKVM is the latest server-management solution from ASUS, built on the ASPEED AST2600 chipset running the latest AMI MegaRAC SP-X. The module provides various interfaces for out-of-band server management, including a WebGUI, the Intelligent Platform Management Interface (IPMI), and the Redfish® API.
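A client consuming the BMC's Redfish API might look like the following minimal sketch. The payload is a made-up sample shaped after the DMTF Redfish `ComputerSystem` schema; a real client would issue an authenticated GET against an endpoint such as `/redfish/v1/Systems/<id>` on the BMC, and the resource path and field values here are illustrative assumptions.

```python
import json

# Made-up sample payload shaped after the DMTF Redfish ComputerSystem schema.
# A real client would fetch this over HTTPS from the BMC with authentication.
sample_payload = """
{
  "@odata.id": "/redfish/v1/Systems/1",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"}
}
"""

system = json.loads(sample_payload)
print(f"Power: {system['PowerState']}, Health: {system['Status']['Health']}")
# → Power: On, Health: OK
```

Because Redfish is a standardized, JSON-over-HTTPS interface, the same parsing logic works across vendors that implement the schema.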
Learn more
ASUS BMC boot time comparison

IT infrastructure-management software

Streamline IT operations via a single dashboard

ASUS Control Center (ACC) is a remote IT-management software application designed for monitoring hardware and software IT assets and inventory status. It enables seamless remote BIOS configuration and updates, efficient IT diagnostics and troubleshooting, and enhanced security through hotfix updates, allowing easier server management for any IT infrastructure.
Learn more
IT infrastructure management software dashboard displaying performance, alerts, and resource metrics.

Hardware Root-of-Trust Solution

Detect, recover, boot and protect

ASUS servers integrate a PFR FPGA as the platform Root-of-Trust solution for firmware resiliency, preventing attackers from gaining access to the infrastructure. ASUS security solutions are fully compliant with the 2018 National Institute of Standards and Technology (NIST) SP 800-193 specification.

* Platform Firmware Resilience (PFR) module must be specified at time of purchase and is factory-fitted. It is not for sale separately.

Trusted Platform Module 2.0

ASUS servers also support Trusted Platform Module 2.0 (TPM 2.0) to secure hardware through integrated cryptographic keys, and offer regular firmware updates to address vulnerabilities.
Learn More about the product support list

Performance

5th Gen Intel® Xeon® Scalable Processors

5th Gen Intel® Xeon® processors unleash up to a 21% average general-purpose performance gain, significantly improving AI inference and training, all via the drop-in-compatible LGA-4677 socket. This innovation powerhouse accelerates AI, HPC, analytics, networking, and storage, and offers eight DDR5-5600 memory channels for up to 2 TB of capacity, 80 PCI Express® 5.0 lanes with CXL 1.1 support, and a 350 W TDP for 1P and 2P configurations — ready for the future of computing.
Learn more
Intel Xeon processor surrounded by built-in accelerators including AMX, DSA, QAT, IAA, and DLB for workload-specific performance improvements.

* Availability of accelerators varies depending on SKU.
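The eight DDR5-5600 channels mentioned above translate into a theoretical peak memory bandwidth that can be derived directly from the DDR5 spec. The 64-bit (8-byte) channel width is an assumption from the DDR5 standard, not a figure from this page:

```python
# Illustrative arithmetic: theoretical peak memory bandwidth per socket.
# Assumption (from the DDR5 standard, not this page): 64-bit channel width.
CHANNELS = 8
TRANSFER_RATE_MT_S = 5600   # DDR5-5600: mega-transfers per second
BYTES_PER_TRANSFER = 8      # 64-bit data path per channel

peak_gb_s = CHANNELS * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000
print(f"Theoretical peak per socket: {peak_gb_s:.1f} GB/s")  # 358.4 GB/s
```

Real-world throughput will be lower due to refresh overhead, rank interleaving, and controller efficiency; the figure is an upper bound only.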