NVIDIA HGX H100 SXM5 8‑GPU Board (935‑24287‑0301‑000) Board - New

NVIDIA

  • $169,000.00 (unit price)
Shipping calculated at checkout.


NVIDIA HGX H100 SXM5 8‑GPU Board (935‑24287‑0301‑000)

Enterprise-Grade 8‑Way H100 SXM5 AI Accelerator Platform


The NVIDIA HGX H100 SXM5 8‑GPU Board (part number 935‑24287‑0301‑000) integrates eight NVIDIA Hopper‑based H100 SXM5 GPUs, each with 80 GB of HBM3 memory, onto a single high-density accelerator board. Built for AI training, inference, and exascale HPC, it is designed for direct liquid cooling. With 900 GB/s of NVLink interconnect bandwidth per GPU and 3.35 TB/s of HBM3 memory bandwidth per GPU, this platform delivers exceptional scalability and performance.


🔧 Product Specifications for NVIDIA HGX H100 SXM5 8‑GPU Board (935‑24287‑0301‑000)

| Feature | Details |
| --- | --- |
| Part Name / Number | NVIDIA HGX H100 SXM5 8‑GPU Board, 935‑24287‑0301‑000 |
| Form Factor | SXM5 (8 GPUs on a single board, direct liquid-cooled) |
| GPUs | 8 × NVIDIA H100 SXM5, 80 GB HBM3 each |
| Total GPU Memory | 640 GB HBM3 |
| Memory Bandwidth | 3.35 TB/s per GPU (HBM3) |
| CUDA Cores (Total) | 8 × 16,896 = 135,168 |
| Tensor Cores (Total) | 8 × 528 = 4,224 |
| GPU Interconnect | NVLink (SXM5), 900 GB/s per GPU, GPU-to-GPU |
| Thermal Design Power | Configurable up to 700 W per GPU |
| Cooling | Direct liquid cooling (cooling equipment not included) |
| Power Interface | Board-level power connectors (server-integrated power delivery) |
| PCIe Host Interface | PCIe Gen 5 x16 (board to host) |
| Multi‑Instance GPU (MIG) | Each H100 supports up to 7 MIG instances |
| Use Cases | Distributed AI training, supercomputing, HPC clusters |
| Platform | NVIDIA Hopper architecture, NVLink-enabled, NVIDIA HGX/DGX-compliant |
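As a quick sanity check on the aggregate figures in the table above, the board-level totals follow directly from the per-GPU numbers. A minimal arithmetic sketch:

```python
# Aggregate figures for the 8-GPU HGX H100 board,
# computed from the per-GPU numbers in the spec table.
NUM_GPUS = 8
HBM3_PER_GPU_GB = 80          # HBM3 capacity per H100 SXM5
CUDA_CORES_PER_GPU = 16_896   # CUDA cores per H100 SXM5
TENSOR_CORES_PER_GPU = 528    # Tensor cores per H100 SXM5

total_memory_gb = NUM_GPUS * HBM3_PER_GPU_GB        # 640 GB
total_cuda_cores = NUM_GPUS * CUDA_CORES_PER_GPU    # 135,168 cores
total_tensor_cores = NUM_GPUS * TENSOR_CORES_PER_GPU  # 4,224 cores

print(total_memory_gb, total_cuda_cores, total_tensor_cores)  # 640 135168 4224
```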

❓ Frequently Asked Questions (FAQs)

Q1: What is the total GPU memory on this board?
A: The board provides 640 GB of total HBM3 memory: 80 GB per GPU across eight SXM5 GPUs.

Q2: How do the GPUs communicate within the board?
A: Via NVLink on the SXM5 board, offering up to 900 GB/s of total NVLink bandwidth per GPU for high-speed GPU-to-GPU communication.

Q3: What cooling solution does this board require?
A: It requires direct liquid cooling, typically implemented in a purpose-built HGX-compatible chassis. The board is liquid-cooling ready, but cooling equipment is not included.

Q4: Is this board plug-and-play with standard servers?
A: No. It's designed for HGX-compatible host systems with integrated power, liquid cooling, and server infrastructure.

Q5: Can I run multi-instance workloads?
A: Yes—each H100 supports up to 7 MIG instances, suitable for concurrent multi-tenant or mixed workloads.
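For illustration, the standard MIG profiles NVIDIA documents for an 80 GB H100 can be tabulated in a short bookkeeping sketch (profile names follow NVIDIA's MIG user guide; actual partitioning is done with NVIDIA's MIG tooling, not this code):

```python
# Standard MIG profiles for an H100 80GB GPU:
# profile name -> (max concurrent instances per GPU, memory per instance in GB).
MIG_PROFILES = {
    "1g.10gb": (7, 10),
    "2g.20gb": (3, 20),
    "3g.40gb": (2, 40),
    "4g.40gb": (1, 40),
    "7g.80gb": (1, 80),
}

def max_instances(profile: str) -> int:
    """Maximum concurrent instances of a given profile on one GPU."""
    return MIG_PROFILES[profile][0]

# The board holds 8 GPUs, so fully partitioning every GPU into the
# smallest profile yields 8 x 7 = 56 isolated instances.
board_instances = 8 * max_instances("1g.10gb")
print(board_instances)  # 56
```

This makes the FAQ's point concrete: a single board can be carved into up to 56 hardware-isolated instances for multi-tenant or mixed workloads.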

