
News: NVIDIA Announces H100 NVL - Max Memory Server Card for Large Language Models

While this year's Spring GTC event doesn't feature any new GPUs or GPU architectures from NVIDIA, the company is still rolling out new products based on the Hopper and Ada Lovelace GPUs it introduced over the past year. At the high end of the market, the company is today announcing a new H100 accelerator variant aimed specifically at large language model users: the H100 NVL.

The H100 NVL is an interesting variant of NVIDIA's H100 PCIe card that, in a sign of the times and of NVIDIA's success in the AI field, is aimed at a singular market: large language model (LLM) deployment. A few things make this card atypical for NVIDIA's usual server fare, not least that it is actually two H100 PCIe cards that ship already bridged together, but the big takeaway is the memory capacity. The combined dual-GPU card offers 188GB of HBM3 memory (94GB per card), more memory per GPU than any other NVIDIA part to date, even within the H100 family.
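Since the H100 NVL is two bridged boards rather than one monolithic device, software would presumably see it as two CUDA devices joined by NVLink. As a minimal sketch (using only the standard CUDA runtime API; nothing here is H100 NVL specific, and the two-device layout is an assumption based on the description above), a deployment could enumerate the devices, report their memory, and confirm peer access over the bridge:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    // Enumerate all visible CUDA devices; a single H100 NVL
    // should appear as two of them, each with 94GB.
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.0f GB\n",
               i, prop.name, prop.totalGlobalMem / 1e9);
    }
    // Check whether device 0 can access device 1's memory
    // directly, which the NVLink bridge should enable.
    if (count >= 2) {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        printf("Peer access 0 -> 1: %s\n", canAccess ? "yes" : "no");
    }
    return 0;
}

With peer access enabled (via cudaDeviceEnablePeerAccess), an LLM serving stack can split a model's weights across the two 94GB halves and exchange activations over the bridge, which is the deployment pattern this card is aimed at.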
     