NVIDIA Tesla A100 80 GB SXM4 GPU (699‑2G506‑0210‑300) - Used / Tested

NVIDIA

  • $11,500.00 per unit
Shipping calculated at checkout.


NVIDIA Tesla A100 80 GB SXM4 GPU (699‑2G506‑0210‑300)

Industry-Leading Ampere GPU with 80 GB HBM2e and NVLink for AI & HPC


The NVIDIA Tesla A100 80 GB SXM4 GPU is built on the Ampere architecture, delivering industry-leading performance for deep learning, high-performance computing, and data analytics. With 80 GB of HBM2e memory and roughly 2 TB/s of memory bandwidth, it excels at large-scale model training and multi-GPU inference. Offered in the SXM4 form factor with NVLink for efficient multi-GPU scaling, this module is suited to GPU-accelerated data center servers and AI clusters.
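As a rough back-of-envelope sketch (weights only, ignoring activations, optimizer state, and framework overhead), the snippet below estimates how many model parameters the 80 GB of HBM2e could hold at common precisions:

```python
# Back-of-envelope: how many model weights fit in 80 GB of HBM2e?
# Weights only -- real workloads also need activations, KV caches,
# optimizer state, and framework overhead, so treat these as upper bounds.

GPU_MEMORY_BYTES = 80 * 10**9  # 80 GB, decimal gigabytes as marketed

BYTES_PER_PARAM = {
    "FP32": 4,
    "FP16/BF16": 2,
    "INT8": 1,
}

def max_params(precision: str) -> float:
    """Upper-bound parameter count that fits in GPU memory."""
    return GPU_MEMORY_BYTES / BYTES_PER_PARAM[precision]

for precision in BYTES_PER_PARAM:
    print(f"{precision:>10}: ~{max_params(precision) / 1e9:.0f}B parameters")
```

By this estimate, FP16/BF16 weights for a model of up to roughly 40 billion parameters fit in memory, which is why the 80 GB variant is popular for large language model work.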


🔧 Product Specifications for NVIDIA Tesla A100 80 GB SXM4 GPU

| Feature | Details |
| --- | --- |
| Model / Part Number | NVIDIA Tesla A100 80 GB SXM4 (699‑2G506‑0210‑300) |
| Architecture | NVIDIA Ampere (GA100 GPU) |
| CUDA Cores | 6,912 |
| Tensor Cores | 432 |
| GPU Memory | 80 GB HBM2e |
| Memory Bandwidth | 2,039 GB/s |
| FP64 Performance | 9.7 TFLOPS |
| FP32 Performance | 19.5 TFLOPS |
| TF32 (Tensor Float32) | 156 TFLOPS |
| FP16 / BF16 | 312 TFLOPS |
| INT8 Tensor Core | 1,248 TOPS (with sparsity) |
| Form Factor | SXM4 module (for GPU-accelerated servers) |
| Cooling | Passive (server-provided airflow) |
| NVLink Bandwidth | Up to 600 GB/s |
| MIG Support | Up to 7 GPU instances |
| TDP | 400 W |
| Weight | ~2 lb (heatsink only); full module ~11 lb (estimated) |
| Die Size / Transistors | 826 mm², ~54B transistors |
| Compatibility | NVLink, MIG, DGX/AEP compatible |
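The compute and bandwidth figures above can be combined into a simple roofline-style check. The sketch below (using only numbers from the spec table) computes the machine balance, i.e. how many FLOPs a kernel must perform per byte of memory traffic before it stops being memory-bandwidth-bound:

```python
# Machine balance (FLOPs per byte of memory traffic) from the listed specs.
# Kernels whose arithmetic intensity falls below these ratios are
# memory-bandwidth-bound on this GPU (simple roofline reasoning).

MEM_BW = 2039e9  # bytes/s, from the spec table

PEAK_FLOPS = {
    "FP64": 9.7e12,
    "FP32": 19.5e12,
    "TF32 Tensor": 156e12,
    "FP16/BF16 Tensor": 312e12,
}

for fmt, flops in PEAK_FLOPS.items():
    balance = flops / MEM_BW
    print(f"{fmt:>17}: ~{balance:.0f} FLOPs/byte to stay compute-bound")
```

The takeaway: Tensor Core formats need very high arithmetic intensity (tens to hundreds of FLOPs per byte) to saturate the chip, which is why the 2 TB/s of HBM2e bandwidth matters as much as the raw TFLOPS.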

❓ Frequently Asked Questions (FAQs)

Q1: What workloads is the A100 SXM4 best suited for?
A: It’s ideal for deep learning training, HPC simulations, large language model workloads, and AI inference at scale.

Q2: How does SXM4 differ from PCIe variants?
A: SXM4 provides a higher power budget (400 W), server-integrated cooling, and NVLink support with up to 600 GB/s of interconnect bandwidth, which substantially improves multi-GPU performance.
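To put that interconnect difference in perspective, here is a rough transfer-time comparison for moving a 10 GB tensor between GPUs. The NVLink figure comes from the spec table; the PCIe Gen4 x16 figure (~32 GB/s theoretical) is an assumption added for comparison, and real effective bandwidth is lower on both links:

```python
# Rough transfer-time comparison for moving a 10 GB tensor between GPUs.
# NVLink bandwidth is from the spec table; the PCIe Gen4 x16 figure
# (~32 GB/s theoretical) is an assumed reference point, not a measurement.

TENSOR_BYTES = 10e9  # 10 GB payload

LINKS = {
    "NVLink (A100 SXM4)": 600e9,  # bytes/s, aggregate
    "PCIe Gen4 x16": 32e9,        # bytes/s, theoretical peak
}

def transfer_ms(link: str) -> float:
    """Idealized one-way transfer time in milliseconds."""
    return TENSOR_BYTES / LINKS[link] * 1000

for link in LINKS:
    print(f"{link}: ~{transfer_ms(link):.1f} ms")
```

Even in this idealized sketch, NVLink moves the same payload roughly 19x faster, which is the main reason SXM4 systems scale better for multi-GPU training.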

Q3: Does it support Multi-Instance GPU (MIG)?
A: Yes, it supports up to 7 independent GPU instances, enabling fine-grained resource partitioning.
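As an illustration of how that partitioning works in practice, the sketch below models NVIDIA's commonly documented MIG profiles for the 80 GB A100 (profile names and per-instance memory are taken from NVIDIA's MIG documentation as an assumption here; exact availability depends on driver version):

```python
# Sketch of how MIG partitions the 80 GB A100 into isolated instances.
# Profile names and memory sizes follow NVIDIA's commonly documented
# A100-80GB MIG profiles; availability can vary with driver version.

MIG_PROFILES = {
    # profile: (max instances per GPU, memory per instance in GB)
    "1g.10gb": (7, 10),
    "2g.20gb": (3, 20),
    "3g.40gb": (2, 40),
    "4g.40gb": (1, 40),
    "7g.80gb": (1, 80),
}

def total_memory_gb(profile: str) -> int:
    """Memory claimed if the maximum number of instances is created."""
    count, mem = MIG_PROFILES[profile]
    return count * mem

for profile, (count, mem) in MIG_PROFILES.items():
    print(f"{profile}: up to {count} x {mem} GB instances")
```

The smallest profile yields seven isolated ~10 GB instances, which is useful for serving many small inference workloads on one physical GPU.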

Q4: What precision formats does the A100 support?
A: It supports FP64, FP32, TF32, FP16, BF16, INT8, and INT4, making it effective for both training and inference workloads.

Q5: What are thermal and power requirements?
A: Requires a 400 W thermal envelope with robust server airflow; the heatsink alone weighs ~2 lb.

