The NVIDIA H200 Tensor Core GPU is a powerful, highly optimized processing platform for generative AI, inference, and other demanding GPU-accelerated applications. It supports floating-point operations at precisions from FP64 down to FP8, as well as INT8 (integer) calculations, making it a single accelerator for virtually every compute workload. With 141GB of HBM3e high-bandwidth memory delivering 4.8TB/s of bandwidth, it also outpaces traditional consumer and even professional video cards by a wide margin on memory-intensive jobs.
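On a system that already has the CUDA toolkit installed, the memory capacity and bus width quoted above can be read back directly from the runtime. The following is only a minimal sketch (it assumes at least one visible GPU and omits error checking):

```cpp
// Minimal sketch: print the properties behind the memory figures quoted above.
// Assumes the CUDA toolkit is installed and at least one GPU is visible.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s\n", i, prop.name);
        printf("  Memory:    %.1f GB\n", prop.totalGlobalMem / 1e9);
        printf("  Bus width: %d-bit\n", prop.memoryBusWidth);
        printf("  SM count:  %d\n", prop.multiProcessorCount);
    }
    return 0;
}
```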
This H200 NVL variant allows pairs of cards to be connected with three bridges so they can use NVLink, a very high-speed interconnect for improved multi-GPU operation. Each card is housed in a traditional 2-slot graphics card form factor and fits in a standard PCI-Express 5.0 x16 slot, but the fanless heatsink means these cards are only functional in purpose-built systems designed to fit and cool such powerful GPUs.
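Once a bridged pair is installed, a multi-GPU application can check for and enable peer-to-peer access between the two cards through the CUDA runtime. The sketch below assumes GPUs 0 and 1 are the NVLink-bridged pair (the device indices are an assumption) and omits error checking:

```cpp
// Minimal sketch: check and enable peer-to-peer access between an
// NVLink-bridged pair. Device IDs 0 and 1 are an assumption.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);

    if (canAccess01 && canAccess10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // flags must be 0
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        printf("Peer access enabled between GPU 0 and GPU 1\n");
    } else {
        printf("Peer access not available; transfers fall back to PCIe\n");
    }
    return 0;
}
```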
Specifications
| Specification | Value |
| --- | --- |
| Chipset Manufacturer | NVIDIA |
| Product Category | Data Center |
| Motherboard Connection | PCI Express 5.0 x16 |
| Cooling Method | Passive Heatsink |
| Core Specifications | |
| CUDA Cores | 16,896 |
| Memory Specifications | |
| Onboard Memory | 141GB |
| Memory Type | HBM3e |
| Memory Bus Width | 6144-bit |
| Memory Bandwidth | 4,890 GB/s |
| Performance | |
| Double Precision (FP64) Floating Point, Peak | 30 TFLOPS |
| Single Precision (FP32) Floating Point, Peak | 60 TFLOPS |
| Power Connectors | |
| Plug 1 | 16-pin PCIe |
| Dimensions | |
| Length | 267 mm (10.5 in) |
| Height | 112 mm (4.4 in) |
| Width | 42 mm (1.7 in) |
| Net Weight | 1.28 kg (2.8 lbs) |