
Nvidia H100 Tensor Core GPUs come to Oracle Cloud


In response to growing demand for generative AI applications and large language models (LLMs), Oracle Cloud Infrastructure (OCI) has made Nvidia H100 Tensor Core GPUs available on the OCI Compute platform. Nvidia L40S GPUs will also be coming to the platform soon.

Oracle said OCI Compute now offers bare-metal instances with Nvidia H100 GPUs, powered by the Nvidia Hopper architecture for AI, enabling an “order-of-magnitude performance leap” for large-scale AI and high-performance computing applications. The Nvidia H100 GPU is designed for resource-intensive computing tasks, including training LLMs.

Organizations using Nvidia H100 GPUs can achieve as much as a 30x increase in AI inference performance and a 4x boost in AI training compared with Nvidia A100 Tensor Core GPUs, Oracle said. The BM.GPU.H100.8 OCI Compute shape includes eight Nvidia H100 GPUs, each with 80GB of HBM3 GPU memory.
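As a rough illustration, provisioning one of these bare-metal instances through the OCI Python SDK might look like the sketch below. The shape name comes from Oracle's announcement; the availability domain, compartment, image, and subnet identifiers are placeholders to be replaced with values from your own tenancy.

```python
import oci

# Load credentials from the default ~/.oci/config profile.
config = oci.config.from_file()
compute = oci.core.ComputeClient(config)

# Sketch of a launch request for the H100 bare-metal shape.
# All OCIDs below are placeholders, not real identifiers.
details = oci.core.models.LaunchInstanceDetails(
    availability_domain="<availability-domain>",
    compartment_id="<compartment-ocid>",
    shape="BM.GPU.H100.8",  # eight H100 GPUs per instance
    display_name="h100-training-node",
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="<gpu-image-ocid>",
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="<subnet-ocid>",
    ),
)

response = compute.launch_instance(details)
print(response.data.id, response.data.lifecycle_state)
```

The same request can be made through the OCI console or CLI; the SDK version is shown here only to make the shape selection concrete.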

OCI Compute bare-metal instances with Nvidia L40S GPUs will be available for early access later this year, with general availability in early 2024. The Nvidia L40S GPU, based on the Nvidia Ada Lovelace architecture for graphics, AI, and gaming, serves as a universal GPU for the data center, providing multi-workload acceleration for LLM inference and training, visual computing, and video applications.
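Once an instance with either GPU type is running, a quick sanity check from inside the operating system confirms that all devices are visible to the training framework. The snippet below is a minimal sketch assuming PyTorch with CUDA support is installed on the instance image.

```python
import torch

# List every GPU the CUDA runtime can see, with its name and memory.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA devices visible -- check the NVIDIA driver installation.")
```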

Copyright © 2023 IDG Communications, Inc.
