Compare XPUs

Select up to 5 XPUs to compare side-by-side

Select XPUs to Compare

Showing 72 XPUs, 5 selected

| Vendor | Model | Peak TFLOPs |
|---|---|---|
| Alibaba | Hanguang 800 | |
| AMD | MI100 | 23.1 |
| AMD | MI210 | 181 |
| AMD | MI250X | 383 |
| AMD | MI300A | 980.6 |
| AMD | MI300X | 1,307 |
| AMD | MI325X | 1,400 |
| AMD | MI350X | 2,100 |
| AMD | Radeon PRO W7900 | 122 |
| AMD | Radeon RX 7900 XT | 104 |
| AMD | Radeon RX 7900 XTX | 122 |
| AWS | Inferentia2 | 190 |
| AWS | Trainium | 190 |
| AWS | Trainium2 | 680 |
| Baidu | Kunlun II | |
| Biren Technology | BR100 | |
| Cambricon | MLU370 | 256 |
| Cerebras | WSE-3 | |
| Enflame Technology | CloudBlazer T20 | |
| Etched | Sohu | 10,000 |
| FuriosaAI | Warboy | |
| Google | TPU v4 | 275 |
| Google | TPU v5e | 197 |
| Google | TPU v5p | 459 |
| Google | TPU v6e (Trillium) | 918 |
| Graphcore | Bow IPU | |
| Graphcore | IPU-M2000 | |
| Groq | LPU Inference Engine | |
| Huawei | Ascend 910B | |
| Iluvatar CoreX | BI-V150 | 300 |
| Intel | Data Center GPU Max 1100 | 177 |
| Intel | Data Center GPU Max 1550 | 419 |
| Intel Habana | Gaudi 2 | 432 |
| Intel Habana | Gaudi 3 | 1,835 |
| Meta | MTIA v1 | |
| Microsoft | Maia 100 | 700 |
| Moore Threads | MTT S80 | |
| NVIDIA | A10 | 125 |
| NVIDIA | A100 SXM | 312 |
| NVIDIA | A40 | 150 |
| NVIDIA | B200 | 2,250 |
| NVIDIA | GB200 NVL72 | 360,000 |
| NVIDIA | GB200 Superchip | 5,000 |
| NVIDIA | GeForce RTX 4060 Ti | 44.2 |
| NVIDIA | GeForce RTX 4070 | 58.2 |
| NVIDIA | GeForce RTX 4070 Super | 71 |
| NVIDIA | GeForce RTX 4070 Ti | 80.2 |
| NVIDIA | GeForce RTX 4070 Ti Super | 88.2 |
| NVIDIA | GeForce RTX 4080 | 97.5 |
| NVIDIA | GeForce RTX 4080 Super | 104.4 |
| NVIDIA | GeForce RTX 4090 | 165.2 |
| NVIDIA | GeForce RTX 5070 | 61.6 |
| NVIDIA | GeForce RTX 5070 Ti | 88 |
| NVIDIA | GeForce RTX 5080 | 112.6 |
| NVIDIA | GeForce RTX 5090 | 209.5 |
| NVIDIA | H100 PCIe | 1,513 |
| NVIDIA | H100 SXM | 1,979 |
| NVIDIA | H200 PCIe | 1,513 |
| NVIDIA | H200 SXM | 1,979 |
| NVIDIA | L4 | 121 |
| NVIDIA | L40S | 733 |
| NVIDIA | RTX 4000 Ada Generation | 53.4 |
| NVIDIA | RTX 5000 Ada Generation | 130.6 |
| NVIDIA | RTX 6000 Ada Generation | 182.2 |
| NVIDIA | RTX PRO 6000 Blackwell Max-Q | 125 |
| NVIDIA | RTX PRO 6000 Blackwell Server Edition | 250 |
| NVIDIA | RTX PRO 6000 Blackwell Workstation Edition | 250 |
| Qualcomm | Cloud AI 100 | 50 |
| Rebellions | ATOM | |
| SambaNova | SN40L | |
| Tenstorrent | Grayskull | 200 |
| Tenstorrent | Wormhole | 364 |


Multi-Metric Comparison

Relative performance across 5 key metrics (normalized to 100 = best in comparison):

- Compute Performance (BF16)
- Memory Capacity
- Power Consumption
- Power Efficiency

Specifications

| Specification | NVIDIA L40S | NVIDIA A40 | NVIDIA A10 | AWS Inferentia2 | Qualcomm Cloud AI 100 |
|---|---|---|---|---|---|
| Architecture | Ada Lovelace | Ampere | Ampere | Inferentia Gen2 | Qualcomm AI |
| Form Factor | PCIe | PCIe | PCIe | | PCIe |
| VRAM | 48 GB | 48 GB | 24 GB | 32 GB | 16 GB |
| Memory Bandwidth | 864 GB/s | 696 GB/s | 600 GB/s | | 134 GB/s |
| TFLOPs (FP32) | 91.6 | 37.4 | 31.2 | | 50 |
| TFLOPs (FP16) | 733 | 150 | 125 | | 400 |
| TFLOPs | 733 | 150 | 125 | 190 | 50 |
| TFLOPs (FP8) | 1,466 | | | | |
| TDP | 350 W | 300 W | 150 W | 150 W | 75 W |
| Launch Date | Oct 2023 | Oct 2020 | Apr 2021 | Nov 2022 | Sep 2020 |

Efficiency Metrics

| Metric | L40S | A40 | A10 | Inferentia2 | Cloud AI 100 |
|---|---|---|---|---|---|
| TFLOPs per Watt (FP32-eq) | 1.05 | 0.25 | 0.42 | 0.63 | 0.67 |
| Memory Bandwidth per GB of VRAM | 18.0 GB/s | 14.5 GB/s | 25.0 GB/s | | 8.4 GB/s |
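Both efficiency rows look like straightforward derived quantities: FP32-equivalent throughput (FP16 TFLOPs halved, or the native FP32 figure where that is the chip's headline number, as for the Cloud AI 100) divided by TDP, and memory bandwidth divided by VRAM capacity. A sketch that reproduces the table under that assumption:

```python
# Reproduces the Efficiency Metrics rows from the Specifications table.
# Assumption: FP32-eq TFLOPs = FP16 TFLOPs / 2, except the Cloud AI 100,
# whose 50 TFLOPs headline figure is already FP32.

specs = {
    # name: (fp32_eq_tflops, tdp_watts, bandwidth_gbs, vram_gb)
    "L40S":         (733 / 2, 350, 864, 48),
    "A40":          (150 / 2, 300, 696, 48),
    "A10":          (125 / 2, 150, 600, 24),
    "Inferentia2":  (190 / 2, 150, None, 32),  # bandwidth not listed
    "Cloud AI 100": (50,       75, 134, 16),
}

for name, (tflops, tdp, bw, vram) in specs.items():
    row = f"{name}: {tflops / tdp:.2f} TFLOPs/W"
    if bw is not None:
        row += f", {bw / vram:.1f} GB/s per GB of VRAM"
    print(row)
# L40S: 1.05 TFLOPs/W, 18.0 GB/s per GB of VRAM
# ...
# Cloud AI 100: 0.67 TFLOPs/W, 8.4 GB/s per GB of VRAM
```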

Performance Equivalence

How many units of each GPU are needed to match the performance of the others?

Ratios show how many units of the listed XPU are needed to match one unit of the target on that metric; a ratio below 1.00x means the listed XPU already exceeds the target. Blank cells reflect metrics not listed for the Inferentia2 (FP32 throughput and memory bandwidth).

To match 1x NVIDIA L40S

| XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth |
|---|---|---|---|---|
| NVIDIA A40 | 4.89x | 2.45x | 1.00x | 1.24x |
| NVIDIA A10 | 5.86x | 2.94x | 2.00x | 1.44x |
| AWS Inferentia2 | 3.86x | | 1.50x | |
| Qualcomm Cloud AI 100 | 7.33x | 1.83x | 3.00x | 6.45x |

To match 1x NVIDIA A40

| XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth |
|---|---|---|---|---|
| NVIDIA L40S | 0.20x | 0.41x | 1.00x | 0.81x |
| NVIDIA A10 | 1.20x | 1.20x | 2.00x | 1.16x |
| AWS Inferentia2 | 0.79x | | 1.50x | |
| Qualcomm Cloud AI 100 | 1.50x | 0.75x | 3.00x | 5.19x |

To match 1x NVIDIA A10

| XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth |
|---|---|---|---|---|
| NVIDIA L40S | 0.17x | 0.34x | 0.50x | 0.69x |
| NVIDIA A40 | 0.83x | 0.83x | 0.50x | 0.86x |
| AWS Inferentia2 | 0.66x | | 0.75x | |
| Qualcomm Cloud AI 100 | 1.25x | 0.62x | 1.50x | 4.48x |

To match 1x AWS Inferentia2

| XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth |
|---|---|---|---|---|
| NVIDIA L40S | 0.26x | | 0.67x | |
| NVIDIA A40 | 1.27x | | 0.67x | |
| NVIDIA A10 | 1.52x | | 1.33x | |
| Qualcomm Cloud AI 100 | 1.90x | | 2.00x | |

To match 1x Qualcomm Cloud AI 100

| XPU | Compute (FP32-eq) | FP32 Compute | VRAM | Memory Bandwidth |
|---|---|---|---|---|
| NVIDIA L40S | 0.14x | 0.55x | 0.33x | 0.16x |
| NVIDIA A40 | 0.67x | 1.34x | 0.33x | 0.19x |
| NVIDIA A10 | 0.80x | 1.60x | 0.67x | 0.22x |
| AWS Inferentia2 | 0.53x | | 0.50x | |
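Each ratio above appears to be a plain quotient: the target's value on a metric divided by the other chip's value, read as units of the other chip per one target. A minimal sketch under that assumption, using the Specifications figures (FP32-eq = FP16 TFLOPs / 2):

```python
# Equivalence ratios as target_value / other_value per shared metric.
# Metrics missing for a chip (Inferentia2 FP32 and bandwidth) are skipped.

SPECS: dict[str, dict[str, float]] = {
    "L40S":         {"fp32_eq": 366.5, "fp32": 91.6, "vram_gb": 48, "bw_gbs": 864},
    "A40":          {"fp32_eq": 75.0,  "fp32": 37.4, "vram_gb": 48, "bw_gbs": 696},
    "A10":          {"fp32_eq": 62.5,  "fp32": 31.2, "vram_gb": 24, "bw_gbs": 600},
    "Inferentia2":  {"fp32_eq": 95.0,  "vram_gb": 32},
    "Cloud AI 100": {"fp32_eq": 50.0,  "fp32": 50.0, "vram_gb": 16, "bw_gbs": 134},
}

def units_needed(target: str, other: str) -> dict[str, float]:
    """How many units of `other` match one `target`, per shared metric."""
    t, o = SPECS[target], SPECS[other]
    return {metric: round(t[metric] / o[metric], 2) for metric in t if metric in o}

print(units_needed("L40S", "A40"))
# {'fp32_eq': 4.89, 'fp32': 2.45, 'vram_gb': 1.0, 'bw_gbs': 1.24}
```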

Pricing

| Price Type | L40S | A40 | A10 | Inferentia2 | Cloud AI 100 |
|---|---|---|---|---|---|
| CAPEX (Street Price) | $10,000 | | | | |
| OPEX (per hour) | $1.50/hr | | | | |
| Price per TFLOPs (FP32-eq) | $27 | | | | |

Pricing data is listed only for the L40S.
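The $27 figure is consistent with dividing the L40S street price by its FP32-equivalent throughput; a one-line check, assuming FP32-eq = FP16 TFLOPs / 2:

```python
# Price per FP32-equivalent TFLOPs for the L40S.
street_price_usd = 10_000      # CAPEX row
fp32_eq_tflops = 733 / 2       # 366.5 (FP16 TFLOPs halved)
print(f"${street_price_usd / fp32_eq_tflops:.0f} per TFLOPs")  # $27
```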