Nvidia Launches Hopper H100 GPU, New DGXs and Grace Superchips

November 3, 2022
The battle for datacenter dominance keeps getting hotter. Today, Nvidia kicked off its spring GTC event with new silicon, new software and a new supercomputer. Speaking from a virtual environment in the Nvidia Omniverse 3D collaboration and simulation platform (adieu “kitchen keynote”), CEO Jensen Huang introduced the new Hopper GPU architecture and the H100 GPU, which will power datacenter-scale systems for HPC and AI workloads.

Nvidia’s first Hopper-based product, the H100 GPU, is manufactured on TSMC’s 4N process, leveraging a whopping 80 billion transistors – roughly 48 percent more than the prior-generation 7nm A100 GPU’s 54.2 billion. The H100 is the first GPU to support PCIe Gen5 and the first to utilize HBM3, enabling 3TB/s of memory bandwidth.
Named after computer scientist and U.S. Navy Rear Admiral Grace Hopper, the new GPU (in its SXM mezzanine form factor) provides 30 teraflops of peak standard IEEE FP64 performance, 60 teraflops of peak FP64 tensor core performance, and 60 teraflops of peak FP32 performance. A new numerical format introduced in Hopper, FP8 tensor core, delivers up to 4,000 theoretical teraflops of AI performance, according to Nvidia. See spec info and gen-to-gen comparisons below.

[Image: H100 specifications and gen-to-gen comparison]
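For a sense of scale, the headline figures above can be set against Nvidia’s published A100 (SXM) numbers. The sketch below does that arithmetic in plain Python; the A100 values and transistor counts are public Nvidia specifications rather than figures from this article, and all of them are theoretical peaks, so the ratios are upper bounds rather than measured speedups.

```python
# Rough gen-to-gen comparison of peak (theoretical) figures.
# H100 numbers are the SXM values quoted in the article; A100 SXM values
# are Nvidia's published specs and are included here as assumptions.
a100 = {
    "fp64_tflops": 9.7,          # standard IEEE FP64
    "fp64_tensor_tflops": 19.5,  # FP64 via tensor cores
    "fp32_tflops": 19.5,
    "mem_bandwidth_tb_s": 2.0,   # HBM2e (80GB model, ~2TB/s)
    "transistors_billion": 54.2,
}
h100 = {
    "fp64_tflops": 30.0,
    "fp64_tensor_tflops": 60.0,
    "fp32_tflops": 60.0,
    "mem_bandwidth_tb_s": 3.0,   # HBM3
    "transistors_billion": 80.0,
}

for key in a100:
    ratio = h100[key] / a100[key]
    print(f"{key:24s}  A100 {a100[key]:6.1f}  ->  H100 {h100[key]:6.1f}  ({ratio:.1f}x)")
```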

Hopper introduces built-in acceleration for transformer models, which are widely used for natural language processing. The Hopper Transformer Engine dynamically chooses between 8-bit and 16-bit calculations, intelligently managing precision in the layers of the transformer network to deliver speedups without loss of accuracy, according to Nvidia.
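The precision-picking idea is straightforward to illustrate. The toy sketch below is plain Python, not Nvidia’s actual Transformer Engine (whose heuristics this article does not describe): it keeps a short history of per-layer activation magnitudes and drops a layer to FP8 only when its values fit inside the E4M3 format’s dynamic range with some headroom. The `margin` factor and the layer names are hypothetical.

```python
# Toy illustration (not Nvidia's Transformer Engine): pick a per-layer
# precision by checking whether the layer's observed activation range
# fits comfortably inside the narrower format's dynamic range.
FP8_E4M3_MAX = 448.0   # largest finite value in the E4M3 FP8 format

def choose_precision(abs_max_history, margin=2.0):
    """Return 'fp8' if recent activation magnitudes fit E4M3 with headroom,
    otherwise fall back to 'fp16'. `margin` is a hypothetical safety factor."""
    running_max = max(abs_max_history)
    return "fp8" if running_max * margin <= FP8_E4M3_MAX else "fp16"

# Example: per-layer amax statistics gathered over recent training steps.
layer_stats = {
    "attention_qkv": [12.5, 14.1, 13.8],
    "attention_out": [310.0, 402.0, 377.0],
    "mlp_up":        [55.0, 61.2, 58.9],
}
for layer, history in layer_stats.items():
    print(layer, "->", choose_precision(history))
```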
“Hopper H100 is the biggest generational leap ever — 9x at-scale training performance over A100 and 30x large-language-model inference throughput,” Huang said in his keynote.
Hopper’s second-generation Multi-Instance GPU (MIG) technology enables a single GPU to be partitioned into as many as seven smaller, fully isolated instances. Hopper also introduces new DPX instructions that accelerate dynamic programming, which underpins algorithms for route optimization and genomics, by up to 7x compared with previous-generation GPUs and up to 40x compared with CPUs, according to Nvidia.
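The dynamic-programming workloads Nvidia cites share the same inner pattern: repeated add-then-min (or max) steps over a table. The plain-Python reference below computes edit distance, a genomics-style alignment recurrence, purely to show that pattern; it is a CPU illustration, not DPX or GPU code.

```python
# CPU reference for the kind of dynamic-programming recurrence that
# Hopper's DPX instructions target (fused add + min/max). This is plain
# Python for illustration, not the GPU instruction path.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic DP recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            # The hot loop is repeated add-then-min steps -- exactly the
            # pattern DPX-style instructions fuse in hardware.
            curr[j] = min(prev[j] + 1,               # deletion
                          curr[j - 1] + 1,           # insertion
                          prev[j - 1] + (ca != cb))  # substitution/match
        prev = curr
    return prev[-1]

print(edit_distance("GATTACA", "GCATGCU"))  # -> 4
```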
Hopper features new fourth-generation Nvidia NVLink technology, which for the first time extends outside the server in the form of the new NVLink Switch. The switch system connects up to 256 H100 GPUs (32-node DGX Pods) at 9x higher bandwidth than the previous generation, which relied on Nvidia HDR Quantum InfiniBand.
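The parenthetical above follows from the 8-GPU DGX H100 configuration, an Nvidia spec not restated in this article: 256 externally linked GPUs corresponds to a 32-node pod.

```python
# Mapping the 256-GPU NVLink Switch domain onto DGX nodes; the 8 GPUs per
# DGX H100 node is Nvidia's published configuration, not a figure from
# this article.
gpus_per_dgx_node = 8
nvlink_switch_gpu_limit = 256
print(nvlink_switch_gpu_limit // gpus_per_dgx_node, "DGX nodes per pod")  # -> 32
```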

In addition to the SXM form factor that will be used inside DGX and HGX systems, Hopper will also be offered in an H100 PCIe form factor GPU, two of which can be linked via an NVLink bridge. Hopper will further be available as a new converged accelerator, the H100 CNX, that pairs the H100 with a ConnectX-7 SmartNIC, which both natively run PCIe Gen 5. The H100 CNX can be deployed in mainstream servers via the PCIe connection to the CPU. “In the mainstream server with four GPUs, H100 CNX will boost the bandwidth to the GPU by four times and, at the same time, free up the CPU to process other parts of the application,” said Paresh Kharya, senior director of product management and marketing at Nvidia, in a pre-briefing held for media and analysts.
Charlie Boyle, vice president and general manager of DGX Systems at Nvidia, highlighted another advantage: access to PCIe Gen 5 performance before Gen 5 server platforms come to market. “The servers that you can buy today are still PCIe Gen 4, but with the combination H100 CNX card, it gives us the advantage of running full Gen 5 networking from the network directly to the GPU without involving the CPU. As the CNX cards are available, customers can upgrade to Hopper and get most of the advantage of the full PCIe Gen 5 system without actually needing to change their infrastructure waiting for the manufacturers to get to Gen 5,” Boyle said.

During the keynote, Huang shared a few more details about Nvidia’s upcoming Grace Arm CPU and announced Grace “Superchips” in two configurations. The combined CPU+GPU SoC that was revealed last year now has an officially confirmed name: the Grace Hopper Superchip. Designed for giant-scale AI and HPC, the platform gives the GPU access to 600GB of memory and features a 900 gigabytes per second coherent interconnect, called NVLink chip-to-chip (C2C).
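Some rough arithmetic puts the 900GB/s figure in context. The sketch below compares it with a PCIe Gen 5 x16 link using generally published PCIe parameters (32 GT/s per lane, 128b/130b encoding), which are not from this article; under those assumptions the coherent link works out to roughly 7x the bandwidth of a Gen 5 x16 slot.

```python
# Back-of-the-envelope comparison of the Grace Hopper coherent link
# against a PCIe Gen 5 x16 slot. PCIe figures are approximate public
# numbers (not from the article): 32 GT/s per lane, 128b/130b encoding.
nvlink_c2c_gb_s = 900.0                       # total, as quoted above

pcie5_lane_gb_s = 32.0 * (128 / 130) / 8      # ~3.94 GB/s per lane, per direction
pcie5_x16_bidir = pcie5_lane_gb_s * 16 * 2    # ~126 GB/s both directions

print(f"PCIe Gen5 x16 (bidirectional): {pcie5_x16_bidir:.0f} GB/s")
print(f"NVLink-C2C advantage:          {nvlink_c2c_gb_s / pcie5_x16_bidir:.1f}x")
```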

Source: TechSpot
