
Do cloud-based genAI services have an enterprise future?



Amazon’s list of AI cloud clients now includes ADP, Delta Air Lines, GoDaddy, Intuit, Pfizer, and Siemens.

Cloud computing currently leads all other methods of delivering genAI applications to enterprises, largely because of the high cost of building out proprietary infrastructure. Amazon Web Services, Google, IBM, Microsoft and Oracle have invested billions of dollars in AI cloud offerings since OpenAI set off a firestorm of adoption with the launch of ChatGPT in November 2022.

“No one but the hyperscalers and mega large companies can afford to train and operate the very large LLMs and foundation models,” said Avivah Litan, Gartner distinguished vice president analyst. “The costs are in the hundreds of millions of dollars.”

By “large,” Litan was referring to models with hundreds of billions of parameters, as opposed to, say, those with fewer than 100 billion parameters. The costs to use LLMs supplied over cloud services, however, “are relatively manageable by enterprises and for now are also subsidized by the hyperscalers,” Litan said.

However, as enterprises expand their genAI pilots, the cost of cloud services can become a limiting factor. Instead, many organizations are looking to deploy smaller, on-premises LLMs aimed at handling specific tasks.

Smaller domain-specific models trained on more data will eventually challenge the dominance of today’s leading LLMs, including OpenAI’s GPT-4, Meta AI’s LLaMA 2, and Google’s PaLM 2. Smaller models would also be easier to train for specific use cases, according to Dan Diasio, Ernst & Young’s Global Artificial Intelligence Consulting Leader.
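The article doesn’t name any particular tooling, but as a rough sketch of what running a “smaller, on-premises LLM aimed at a specific task” can look like in practice, the snippet below loads an openly available ~7B-parameter checkpoint with the open-source Hugging Face transformers library and runs a single domain-specific prompt on local hardware. The model ID, the prompt, and the library choice are all illustrative assumptions, not anything specified in the piece.

```python
# Illustrative sketch only: a small, open LLM run on local (on-premises) hardware.
# Assumes the `transformers` package (and `accelerate` for device_map="auto")
# plus a locally downloadable checkpoint your organization is licensed to use.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model choice; substitute any small open checkpoint you have access to.
MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# The kind of narrow, domain-specific task the article says smaller
# on-premises models are aimed at (example prompt is invented).
prompt = "Summarize the key points of our corporate travel expense policy for a new employee."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A setup like this trades the per-token fees of a hosted service for fixed local hardware costs, which is the economic shift the analysts quoted here are describing.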



