Compare XPUs

Side-by-side comparison of up to 5 XPUs.

Showing 72 XPUs; 4 selected: NVIDIA A10, NVIDIA B200, AWS Inferentia2, NVIDIA H100 PCIe.

Vendor | Model | TFLOPs
Alibaba | Hanguang 800 |
AMD | MI100 | 23.1
AMD | MI210 | 181
AMD | MI250X | 383
AMD | MI300A | 980.6
AMD | MI300X | 1,307
AMD | MI325X | 1,400
AMD | MI350X | 2,100
AMD | Radeon PRO W7900 | 122
AMD | Radeon RX 7900 XT | 104
AMD | Radeon RX 7900 XTX | 122
AWS | Inferentia2 | 190
AWS | Trainium | 190
AWS | Trainium2 | 680
Baidu | Kunlun II |
Biren Technology | BR100 |
Cambricon | MLU370 | 256
Cerebras | WSE-3 |
Enflame Technology | CloudBlazer T20 |
Etched | Sohu | 10,000
FuriosaAI | Warboy |
Google | TPU v4 | 275
Google | TPU v5e | 197
Google | TPU v5p | 459
Google | TPU v6e (Trillium) | 918
Graphcore | Bow IPU |
Graphcore | IPU-M2000 |
Groq | LPU Inference Engine |
Huawei | Ascend 910B |
Iluvatar CoreX | BI-V150 | 300
Intel | Data Center GPU Max 1100 | 177
Intel | Data Center GPU Max 1550 | 419
Intel Habana | Gaudi 2 | 432
Intel Habana | Gaudi 3 | 1,835
Meta | MTIA v1 |
Microsoft | Maia 100 | 700
Moore Threads | MTT S80 |
NVIDIA | A10 | 125
NVIDIA | A100 SXM | 312
NVIDIA | A40 | 150
NVIDIA | B200 | 2,250
NVIDIA | GB200 NVL72 | 360,000
NVIDIA | GB200 Superchip | 5,000
NVIDIA | GeForce RTX 4060 Ti | 44.2
NVIDIA | GeForce RTX 4070 | 58.2
NVIDIA | GeForce RTX 4070 Super | 71
NVIDIA | GeForce RTX 4070 Ti | 80.2
NVIDIA | GeForce RTX 4070 Ti Super | 88.2
NVIDIA | GeForce RTX 4080 | 97.5
NVIDIA | GeForce RTX 4080 Super | 104.4
NVIDIA | GeForce RTX 4090 | 165.2
NVIDIA | GeForce RTX 5070 | 61.6
NVIDIA | GeForce RTX 5070 Ti | 88
NVIDIA | GeForce RTX 5080 | 112.6
NVIDIA | GeForce RTX 5090 | 209.5
NVIDIA | H100 PCIe | 1,513
NVIDIA | H100 SXM | 1,979
NVIDIA | H200 PCIe | 1,513
NVIDIA | H200 SXM | 1,979
NVIDIA | L4 | 121
NVIDIA | L40S | 733
NVIDIA | RTX 4000 Ada Generation | 53.4
NVIDIA | RTX 5000 Ada Generation | 130.6
NVIDIA | RTX 6000 Ada Generation | 182.2
NVIDIA | RTX PRO 6000 Blackwell Max-Q | 125
NVIDIA | RTX PRO 6000 Blackwell Server Edition | 250
NVIDIA | RTX PRO 6000 Blackwell Workstation Edition | 250
Qualcomm | Cloud AI 100 | 50
Rebellions | ATOM |
SambaNova | SN40L |
Tenstorrent | Grayskull | 200
Tenstorrent | Wormhole | 364

Multi-Metric Comparison

[Chart: relative performance across 5 key metrics, normalized to 100 = best in comparison; axes include Compute Performance (BF16), Memory Capacity, Power Consumption, and Power Efficiency.]
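
A minimal sketch of that normalization, assuming a simple linear scale to the best value in the comparison; how the chart handles Power Consumption, where lower is better, is an assumption here (the inverted ratio below is one plausible choice):

```python
def normalize(values: list[float], lower_is_better: bool = False) -> list[float]:
    """Scale a metric so the best XPU in the comparison scores 100."""
    best = min(values) if lower_is_better else max(values)
    return [100 * (best / v if lower_is_better else v / best) for v in values]

# Headline TFLOPs for the four selected XPUs: A10, B200, Inferentia2, H100 PCIe
print(normalize([125, 2250, 190, 1513]))   # B200 -> 100, A10 -> ~5.6
# TDP in watts; lower is better, so the 150 W parts score 100
print(normalize([150, 1000, 150, 350], lower_is_better=True))
```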

Specifications

Specification | NVIDIA A10 | NVIDIA B200 | AWS Inferentia2 | NVIDIA H100 PCIe
Architecture | Ampere | Blackwell | Inferentia Gen2 | Hopper
Form Factor | PCIe | SXM | | PCIe
VRAM | 24 GB | 192 GB | 32 GB | 80 GB
Memory Bandwidth | 600 GB/s | 8,000 GB/s | | 2,000 GB/s
TFLOPs (FP32) | 31.2 | 90 | | 51
TFLOPs (FP16) | 125 | 2,250 | | 1,513
TFLOPs | 125 | 2,250 | 190 | 1,513
TFLOPs (FP8) | | 4,500 | |
TDP | 150 W | 1000 W | 150 W | 350 W
Launch Date | Apr 2021 | Mar 2024 | Nov 2022 | Sep 2022

Efficiency Metrics

Metric | A10 | B200 | Inferentia2 | H100 PCIe
TFLOPs per Watt (FP32-eq) | 0.42 | 1.13 | 0.63 | 2.16
Memory Bandwidth per GB | 25.0 GB/s | 41.7 GB/s | | 25.0 GB/s
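
Both derived rows follow directly from the Specifications table. A minimal sketch that reproduces them, assuming the site's FP32-eq figure is the headline (FP16) TFLOPs halved; that definition is an inference, but it matches every value above:

```python
# Values copied from the Specifications table; None marks cells the page leaves blank.
# "tflops" is the headline TFLOPs row (FP16 for the NVIDIA parts).
specs = {
    "A10":         {"tflops": 125,  "tdp_w": 150,  "vram_gb": 24,  "bw_gbps": 600},
    "B200":        {"tflops": 2250, "tdp_w": 1000, "vram_gb": 192, "bw_gbps": 8000},
    "Inferentia2": {"tflops": 190,  "tdp_w": 150,  "vram_gb": 32,  "bw_gbps": None},
    "H100 PCIe":   {"tflops": 1513, "tdp_w": 350,  "vram_gb": 80,  "bw_gbps": 2000},
}

for name, s in specs.items():
    fp32_eq = s["tflops"] / 2                 # assumed definition of FP32-eq
    tflops_per_watt = fp32_eq / s["tdp_w"]    # table shows 0.42 / 1.13 / 0.63 / 2.16
    bw_per_gb = s["bw_gbps"] / s["vram_gb"] if s["bw_gbps"] else None  # 25.0 / 41.7 / - / 25.0
    print(name, round(tflops_per_watt, 2), bw_per_gb and round(bw_per_gb, 1))
```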

Performance Equivalence

How many units of each XPU are needed to match one unit of the others? A value below 1x means the comparison XPU is faster (or has more capacity) than the target.
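
These ratios are plain quotients of the spec values. A minimal sketch, reusing the assumed FP32-eq = FP16 / 2 definition (`units_to_match` is a hypothetical helper, not the site's code):

```python
# Headline TFLOPs halved -> assumed FP32-eq throughput per unit
fp32_eq = {"A10": 125 / 2, "B200": 2250 / 2, "Inferentia2": 190 / 2, "H100 PCIe": 1513 / 2}

def units_to_match(target: str, xpu: str) -> float:
    """Units of `xpu` needed to equal one `target`; below 1.0 means `xpu` is faster."""
    return fp32_eq[target] / fp32_eq[xpu]

print(round(units_to_match("B200", "A10"), 2))   # 18.0 -> need 18x A10 per B200
print(round(units_to_match("A10", "B200"), 2))   # 0.06 -> B200 is 18x faster
```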

To match 1x NVIDIA A10

XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth
NVIDIA B200 | 0.06x (18.00x faster) | 0.35x (2.88x faster) | 0.13x (8.00x more) | 0.07x (13.33x more)
AWS Inferentia2 | 0.66x (1.52x faster) | | 0.75x (1.33x more) |
NVIDIA H100 PCIe | 0.08x (12.10x faster) | 0.61x (1.63x faster) | 0.30x (3.33x more) | 0.30x (3.33x more)

To match 1x NVIDIA B200

XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth
NVIDIA A10 | 18.00x | 2.88x | 8.00x | 13.33x
AWS Inferentia2 | 11.84x | | 6.00x |
NVIDIA H100 PCIe | 1.49x | 1.76x | 2.40x | 4.00x

To match 1x AWS Inferentia2

XPU | Compute (FP32-eq) | VRAM
NVIDIA A10 | 1.52x | 1.33x
NVIDIA B200 | 0.08x (11.84x faster) | 0.17x (6.00x more)
NVIDIA H100 PCIe | 0.13x (7.96x faster) | 0.40x (2.50x more)

To match 1x NVIDIA H100 PCIe

XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth
NVIDIA A10 | 12.10x | 1.63x | 3.33x | 3.33x
NVIDIA B200 | 0.67x (1.49x faster) | 0.57x (1.76x faster) | 0.42x (2.40x more) | 0.25x (4.00x more)
AWS Inferentia2 | 7.96x | | 2.50x |

Pricing

Price Type | A10 | B200 | Inferentia2 | H100 PCIe
CAPEX (Street Price) | | $70,000 | |
OPEX (per hour) | | | |
Price per TFLOPs (FP32-eq) | | $62 | |
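
The lone price-per-TFLOPs figure is consistent with the B200's street price divided by its FP32-eq throughput, which is why both values are placed in the B200 column above; a quick check under the same FP32-eq assumption:

```python
street_price = 70_000        # B200 CAPEX (street price) from the table, USD
fp32_eq_tflops = 2250 / 2    # assumed FP32-eq: FP16 TFLOPs halved -> 1,125
print(round(street_price / fp32_eq_tflops))  # 62 -> matches the $62 shown above
```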