NVIDIA A100 SXM vs NVIDIA L40S
Detailed specifications, performance benchmarks, and pricing comparison to help you choose the right GPU for your AI workloads.
NVIDIA A100 SXM
Ampere · 2020
NVIDIA L40S
Ada Lovelace · 2023
Specifications · Comparison
Side-by-side specs.
| Spec | NVIDIA A100 SXM | NVIDIA L40S |
|---|---|---|
| Architecture | Ampere | Ada Lovelace |
| VRAM | 80 GB (+66.7%) | 48 GB |
| Memory Type | HBM2e | GDDR6 |
| Memory Bandwidth | 2039 GB/s (+136.0%) | 864 GB/s |
| FP32 Performance | 19.5 TFLOPS | 91.6 TFLOPS (+369.7%) |
| FP16 Performance | 78 TFLOPS | 183 TFLOPS (+134.6%) |
| INT8 Performance | 624 TOPS | 733 TOPS (+17.5%) |
| TDP | 400W | 350W |
| Form Factor | SXM | PCIe |
| Price (avg/hr) | $1.05 | $0.85 |
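The advantage badges in the table follow directly from the raw spec values. A minimal Python sketch (values copied from the table above; the percentage is the higher value's lead over the lower one):

```python
# Spec values from the comparison table above.
specs = {
    "vram_gb":        {"a100_sxm": 80.0,  "l40s": 48.0},
    "bandwidth_gbps": {"a100_sxm": 2039,  "l40s": 864},
    "fp32_tflops":    {"a100_sxm": 19.5,  "l40s": 91.6},
    "fp16_tflops":    {"a100_sxm": 78.0,  "l40s": 183.0},
    "int8_tops":      {"a100_sxm": 624.0, "l40s": 733.0},
}

def pct_faster(winner: float, loser: float) -> float:
    """Lead of the higher value over the lower one, in percent."""
    return (winner / loser - 1.0) * 100.0

for metric, v in specs.items():
    hi, lo = max(v, key=v.get), min(v, key=v.get)
    print(f"{metric}: {hi} leads {lo} by {pct_faster(v[hi], v[lo]):.1f}%")
```

Running this reproduces every badge in the table, e.g. 2039 / 864 gives the A100 SXM a 136.0% bandwidth lead, while 91.6 / 19.5 gives the L40S a 369.7% FP32 lead (about 4.7×).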
Performance · Analysis
Performance breakdown.
Compute (FP32)
Raw single-precision floating point throughput
NVIDIA L40S is 369.7% faster (about 4.7×)
Training (FP16)
Half-precision performance for deep learning training
NVIDIA L40S is 134.6% faster
Inference (INT8)
Integer performance for model inference workloads
NVIDIA L40S is 17.5% faster
Memory Bandwidth
Data transfer rate between memory and compute units
NVIDIA A100 SXM is 136% faster
Best Compute
NVIDIA L40S
Most Memory
NVIDIA A100 SXM
Best Training
NVIDIA L40S
Best Value
NVIDIA L40S
Pricing · Cost
Cost comparison.
Hourly
Save $0.20 with NVIDIA L40S
Daily
Save $4.80 with NVIDIA L40S
Monthly
Save $144.00 with NVIDIA L40S
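The savings figures above are simple arithmetic on the average hourly rates; a short Python sketch (assuming, as this page does, a 30-day month of continuous use):

```python
# Average hourly rates from the pricing section above.
a100_sxm_hr = 1.05
l40s_hr = 0.85

hourly = a100_sxm_hr - l40s_hr   # savings per hour with the L40S
daily = hourly * 24              # savings per day
monthly = daily * 30             # savings per 30-day month

print(f"Save ${hourly:.2f}/hr, ${daily:.2f}/day, ${monthly:.2f}/month")
```

This matches the figures shown: $0.20 hourly, $4.80 daily, $144.00 monthly.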
Use Cases · Workloads
Best for your workload.
NVIDIA A100 SXM
With 80 GB of HBM2e and 2039 GB/s of memory bandwidth, the A100 SXM suits memory-bound workloads: training large models, large-batch inference, and HPC.
NVIDIA L40S
With higher FP32/FP16/INT8 throughput at a lower hourly rate, the L40S suits compute-bound workloads: inference serving, fine-tuning, and rendering.
Platform · Benefits
Why Runcrate.
Instant Deployment
Get your GPU instance running in minutes with pre-configured AI environments. No setup complexity.
Pay Per Hour
Only pay for the compute you actually use. Prepaid credits with transparent, per-hour billing.
Reliable Infrastructure
Enterprise-grade reliability with automatic failover and data persistence across sessions.
Deploy NVIDIA A100 SXM or NVIDIA L40S
Get started with GPU cloud computing in minutes. No setup complexity, no long-term commitments.