xcompute

NVIDIA® H100 Tensor Core GPU

Next-generation Tensor Core GPUs based on the latest NVIDIA Hopper architecture.

The World’s Proven Choice for Enterprise AI
NVIDIA® H100 GPUs NOW AVAILABLE

The NVIDIA® H100, powered by the new Hopper architecture, is a flagship GPU delivering powerful acceleration for AI, big data processing, and high-performance computing (HPC).

With the H100 SXM you get:
  • Additional computing power for generating AI models
  • Enhanced scalability
  • High-bandwidth GPU-to-GPU communication
  • Optimal performance density
Reserve today!

Tech Specs

                                  H100 SXM                     H100 PCIe
  GPU memory                      80 GB                        80 GB
  GPU memory bandwidth            3.35 TB/s                    2 TB/s
  Max thermal design power (TDP)  Up to 700W (configurable)    300-350W (configurable)
  Multi-Instance GPU (MIG)        Up to 7 MIGs @ 10 GB each    Up to 7 MIGs @ 10 GB each
  Form factor                     SXM                          PCIe, dual-slot, air-cooled
  Interconnect                    NVLink: 900GB/s              NVLink: 600GB/s
                                  PCIe Gen5: 128GB/s           PCIe Gen5: 128GB/s
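
To sanity-check these specifications on a provisioned instance, the NVIDIA Management Library can report the GPU name, memory size, and MIG mode. Below is a minimal Python sketch, assuming the nvidia-ml-py (pynvml) bindings and an NVIDIA driver are installed; it only illustrates the query and is not part of the H100 datasheet.

import pynvml

# Query the first GPU's name, total memory, and MIG mode to confirm the specs above.
pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU on the host
    name = pynvml.nvmlDeviceGetName(handle)            # e.g. "NVIDIA H100 80GB HBM3"
    if isinstance(name, bytes):                        # older pynvml versions return bytes
        name = name.decode()
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)       # total/free/used, in bytes

    print(f"GPU:          {name}")
    print(f"Total memory: {mem.total / 1024**3:.0f} GiB")   # H100 SXM/PCIe: 80 GB class

    # MIG mode: 0 = disabled, 1 = enabled (H100 supports up to 7 instances @ 10 GB each)
    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        print(f"MIG mode:     {'enabled' if current else 'disabled'}")
    except pynvml.NVMLError:                           # e.g. MIG not supported on this device
        print("MIG mode:     not available")
finally:
    pynvml.nvmlShutdown()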
Download H100 Datasheet