Comprehensive comparison for AI, machine learning, and high-performance computing workloads.
Hopper Architecture
| Specification | NVIDIA H100 SXM | NVIDIA H100 PCIe |
|---|---|---|
| Architecture | Hopper | Hopper |
| Release Year | 2022 | 2022 |
| VRAM | 80 GB | 80 GB |
| Memory Type | HBM3 | HBM3 |
| Memory Bandwidth | 3350 GB/s | 2000 GB/s |
| FP32 Performance | 60 TFLOPS | 51 TFLOPS |
| FP16 Performance | 120 TFLOPS | 102 TFLOPS |
| INT8 Performance | 2400 TOPS | 2040 TOPS |
| Tensor Cores | 528 | 456 |
| CUDA Cores | 16896 | 14592 |
| TDP | 700W | 350W |
| Form Factor | SXM | PCIe |
| NVLink Support | Yes (900 GB/s) | NVLink bridge (600 GB/s, 2 GPUs) |
| Avg. Price/Hour | $1.50 | $1.40 |
- FP32: single-precision floating-point performance for general compute workloads. NVIDIA H100 SXM is 17.6% faster.
- FP16: half-precision performance optimized for deep learning training. NVIDIA H100 SXM is 17.6% faster.
- INT8: integer performance for efficient model inference and deployment. NVIDIA H100 SXM is 17.6% faster.
- Memory bandwidth: data transfer speed between GPU and memory. NVIDIA H100 SXM is 67.5% higher.
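The percentage deltas above follow directly from the spec-table values. The short Python sketch below reproduces them and adds a rough performance-per-dollar figure from the listed hourly prices; the `specs` dictionary and helper function are illustrative names, not part of any vendor tool.

```python
# Minimal sketch: derive the relative differences quoted above from the spec table.
specs = {
    "H100 SXM":  {"fp32_tflops": 60, "fp16_tflops": 120, "int8_tops": 2400,
                  "mem_bw_gbs": 3350, "price_hr": 1.50},
    "H100 PCIe": {"fp32_tflops": 51, "fp16_tflops": 102, "int8_tops": 2040,
                  "mem_bw_gbs": 2000, "price_hr": 1.40},
}

def pct_higher(a: float, b: float) -> float:
    """Relative advantage of value a over value b, in percent."""
    return (a / b - 1.0) * 100.0

sxm, pcie = specs["H100 SXM"], specs["H100 PCIe"]

for key, label in [("fp32_tflops", "FP32"), ("fp16_tflops", "FP16"),
                   ("int8_tops", "INT8"), ("mem_bw_gbs", "Memory bandwidth"),
                   ("price_hr", "Hourly price")]:
    print(f"{label}: H100 SXM is {pct_higher(sxm[key], pcie[key]):.1f}% higher")

# Rough value metric: FP16 TFLOPS per dollar-hour for each card.
for name, s in specs.items():
    print(f"{name}: {s['fp16_tflops'] / s['price_hr']:.1f} FP16 TFLOPS per $/hr")
```

Run as-is, this prints the 17.6% and 67.5% figures from the table (and the roughly 7% price gap), plus about 80 FP16 TFLOPS per dollar-hour for the SXM card versus about 73 for the PCIe card.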
Enterprise-grade infrastructure
Get a custom quote in 24 hours for reserved GPU clusters with high-speed interconnect, any region, any GPU model, and any number of GPUs you need.
- Any GPU: choose your hardware
- Any Quantity: scale as needed
- Any Region: global availability
- Interconnect: high-speed networking
Go from comparison to running workload in under 60 seconds. No complex setup required.
Only pay for what you use. Stop instances anytime. No hidden fees or long-term commitments.
Enterprise-grade infrastructure with 99.9% uptime. Trusted by AI teams worldwide.
Explore more GPU comparisons to find the perfect match for your workload