NVIDIA H100 SXM

Hopper Architecture

Active

Launched September 2022

Core Specifications

| Spec | Value |
| --- | --- |
| Vendor | NVIDIA |
| Architecture | Hopper |
| Form Factor | SXM |
| VRAM | 80 GB |
| Memory Bandwidth | 3,350 GB/s |
| TDP | 700 W |

Compute Performance

| Precision | TFLOPs |
| --- | --- |
| FP32 | 67 |
| FP16 | 1,979 |
| BF16 | 1,979 |
| FP8 | 3,958 |

Performance Benchmarks

Image Generation

| Configuration | Precision | Performance | Source |
| --- | --- | --- | --- |
| Stable Diffusion XL, 1024x1024, 50 steps | — | 2.375 images/sec | View |

LLM Inference

| Configuration | Precision | Performance | Source |
| --- | --- | --- | --- |
| LLaMA 70B, batch_size=1 | — | 7,600 tokens/sec | View |
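
As a rough illustration, the inference throughput above can be combined with an hourly rental rate to estimate serving cost. This is a simplified sketch that assumes the GPU runs at the benchmarked throughput continuously; the $3.50/hr figure is the Lambda Labs rate from the cloud pricing table below.

```python
# Sketch: rough cost per million generated tokens at a given hourly GPU
# rate and sustained throughput. Assumes 100% utilization (optimistic).

def cost_per_million_tokens(price_per_hour: float, tokens_per_second: float) -> float:
    """Dollars per 1M tokens, given an hourly rate and tokens/sec throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Lambda Labs rate with the LLaMA 70B throughput from the table above:
print(round(cost_per_million_tokens(3.50, 7600), 3))  # ~0.128 USD per 1M tokens
```

Real-world cost will be higher, since production serving rarely sustains peak benchmarked throughput.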

LLM Training

| Configuration | Precision | Performance | Source |
| --- | --- | --- | --- |
| GPT-3 175B equivalent, multi-node | — | 757.895 hours to train | View |
| LLaMA 70B, batch_size=32 | BF16 | 14,250 tokens/sec | View |

Pricing

Hardware Purchase (CAPEX)

| Type | Price (USD) | Region | As of |
| --- | --- | --- | --- |
| Street Price | $32,000 | Global | Oct 2024 |
| Street Price | $30,000 | Global | Dec 2025 |

Cloud Rental (OPEX)

| Provider | Instance Type | Price per Hour | Region | As of |
| --- | --- | --- | --- | --- |
| Salad | 1x H100 (94GB NVL) | $0.99/hr | Global | Oct 2024 |
| Azure | Standard_ND96isr_H100_v5 (8x H100) | $119.60/hr | East US | Oct 2024 |
| Azure | Standard_ND96isr_H100_v5 (1x H100) | $14.95/hr | East US | Oct 2024 |
| Google Cloud | a3-highgpu-8g (8x H100) | $117.84/hr | us-central1 | Oct 2024 |
| Google Cloud | a3-highgpu-1g (1x H100) | $14.73/hr | us-central1 | Oct 2024 |
| AWS | p5.48xlarge (8x H100) | $98.32/hr | us-east-1 | Oct 2024 |
| AWS | p5.xlarge (1x H100) | $12.29/hr | us-east-1 | Oct 2024 |
| Lambda Labs | — | $3.50/hr | us-east | Dec 2025 |
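
To compare multi-GPU instances against single-GPU offerings, it helps to normalize each instance to a per-GPU hourly rate. The sketch below uses the Oct 2024 8x-GPU prices from the table above; instance names are copied from that table.

```python
# Sketch: normalize 8x-H100 instance prices (from the table above)
# to per-GPU hourly rates for an apples-to-apples comparison.
instances = {
    "Azure Standard_ND96isr_H100_v5": (119.60, 8),  # (USD/hr, GPU count)
    "Google Cloud a3-highgpu-8g": (117.84, 8),
    "AWS p5.48xlarge": (98.32, 8),
}

per_gpu = {name: round(price / gpus, 2) for name, (price, gpus) in instances.items()}
print(per_gpu)
```

Note that the normalized rates match the single-GPU prices listed for each provider, so on these clouds there is no per-GPU discount for renting the full 8-GPU node.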

Quick Stats

| Metric | Value |
| --- | --- |
| Peak Performance | 1,979 TFLOPs (BF16) |
| Efficiency | 2.83 TFLOPs per Watt |
| Price per TFLOPs | $16 USD per TFLOPs (BF16) |
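
The derived stats follow directly from the spec and pricing values above: efficiency divides peak BF16 throughput by TDP, and price per TFLOP divides the Oct 2024 street price by the same throughput figure.

```python
# Sketch: how the Quick Stats are derived from the values listed above.
BF16_TFLOPS = 1979        # peak BF16 throughput from the compute table
TDP_W = 700               # board power from the spec table
STREET_PRICE_USD = 32000  # Oct 2024 street price from the pricing table

efficiency = BF16_TFLOPS / TDP_W                   # TFLOPs per watt
price_per_tflop = STREET_PRICE_USD / BF16_TFLOPS   # USD per TFLOP (BF16)

print(round(efficiency, 2), round(price_per_tflop))  # 2.83, 16
```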

Supported Platform

This NVIDIA accelerator can be orchestrated and managed with hosted·ai.

Learn more at hosted·ai

