
The temptation of AI as a service


Back in the early days of the cloud, I had a nice little business taking enterprise applications and reengineering them so they could be delivered as software-as-a-service cloud assets. Many enterprises believed that their custom application, which provided value by addressing a niche need, could be resold as SaaS and become another source of income.

I saw a tire company, a healthcare company, a bank, and even a bail-bond management company attempt to become cloud players before infrastructure as a service was a thing. Sometimes it worked out.

The key hindrance was that the companies wanted to own a SaaS asset but were less interested in actually running it. They would need to invest a great deal of money to make it work, and most were not willing to do so. Just because I could turn their enterprise application into a multitenant, SaaS-delivered asset did not mean they should do it.

“Can” and “should” are two very different things to consider. In most of those cases, the SaaS system ended up being consumed only within the company. In other words, they built an infrastructure with themselves as the only customer.

New generative AI services from AWS

AWS has introduced a new feature aimed at becoming the prime hub for companies’ custom generative AI models. The new offering, Custom Model Import, launched on the Amazon Bedrock platform (AWS’s enterprise-focused generative AI suite) and provides enterprises with the infrastructure to host and fine-tune their in-house AI intellectual property as fully managed APIs.

This move aligns with increasing enterprise demand for tailored AI solutions. The offering also includes tools to expand model knowledge, fine-tune performance, and mitigate bias, all of which are needed to derive value from AI without increasing the risks of using it.

With Custom Model Import, enterprises can bring their own models into Amazon Bedrock, where they sit alongside established models such as Meta’s Llama 3 or Anthropic’s Claude 3. This gives AI users the advantage of managing their custom models centrally, alongside the workflows already in place on Bedrock.
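To make that “managed centrally” point concrete, here is a minimal sketch, assuming a model has already been brought in through Custom Model Import. Once imported, the model is invoked through the same Bedrock Runtime API as the first-party models. The account ID, model ARN, prompt, and request-body fields below are hypothetical placeholders, since the exact body schema depends on the model you import.

import json
import boto3

# Bedrock Runtime is the same data-plane API used for first-party models.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Hypothetical ARN of a model brought in via Custom Model Import.
CUSTOM_MODEL_ARN = "arn:aws:bedrock:us-east-1:111122223333:imported-model/my-model"

# The body schema depends on the imported model; this prompt/max_tokens
# shape is an illustrative assumption, not a fixed contract.
response = bedrock.invoke_model(
    modelId=CUSTOM_MODEL_ARN,
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"prompt": "Summarize this quarter's claims data.", "max_tokens": 256}),
)
print(json.loads(response["body"].read()))

Note that everything here runs through AWS-specific plumbing, which is exactly the kind of native-API dependency I return to below.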

Moreover, AWS has announced enhancements to its Titan suite of AI models. The Titan Image Generator, which turns text descriptions into images, is moving to general availability. AWS remains guarded about the specific training data for this model but indicates it involves both proprietary data and licensed, paid-for content.
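For a sense of what calling such a model involves, here is a hedged sketch against the Titan Image Generator’s model ID on the same Bedrock Runtime API. The TEXT_IMAGE request shape follows AWS’s documented task format, but the prompt and generation settings are illustrative assumptions.

import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# TEXT_IMAGE is the task type for text-to-image generation;
# the prompt and image dimensions here are illustrative.
body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "A watercolor illustration of a data center at dusk"},
    "imageGenerationConfig": {"numberOfImages": 1, "height": 512, "width": 512},
})

response = bedrock.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# Generated images come back base64-encoded.
payload = json.loads(response["body"].read())
with open("titan-image.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))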

Of course, enterprises can leverage these hosted models for their own purposes or offer them as cloud services to partners and other companies willing to pay. AWS did not assert this, by the way; I’m just anticipating how enterprises will view the investment required to move to LLM hosting, whether for others (AI as a service) or for their own use. We learned our lesson with the SaaS attempts of 20 years ago, and most enterprises will build and leverage these models for their own purposes.

Vendors such as AWS say that it’s easier to build and deploy AI on their cloud platforms than on your own. However, if the price gets too high, I suspect we’ll see some repatriation of these models. Of course, many will find that once they leverage the native services on AWS, they are stuck with that platform unless they pay the conversion costs of running their AI in-house or on another public cloud provider.

What does this mean for you?

We’re going to see a ton of these releases in the next year or so as public cloud providers look to lock in more business for their AI services. They will come at an accelerated pace, given the “AI land grab” going on now. Once customers get hooked on AI services, it will be difficult to get off them.

I won’t assign any ill intent to the public cloud providers for these strategies, but I will point out that this was also the basic strategy for selling cloud storage back in 2011. Once you’re using the native APIs, you’re not likely to move to other clouds. Only when things become too expensive do businesses consider repatriation or a move to a managed service provider (MSP) or colocation provider.

So, this is an option for those looking to host and leverage their own AI models in a scalable and convenient way. Again, this is the path of least resistance, meaning quicker and cheaper to deploy—at first.

The larger issue is business viability. We’ve learned from our cloud storage and compute experiences that just because buying something is easier than doing it yourself doesn’t make it the right choice for the long term.

We need to do the math and understand the risk of lock-in, along with the longer-term objectives for how enterprises want to leverage this technology. I fear we’ll make quick decisions and end up regretting them in a few years. We’ve seen that movie before, for sure.

Copyright © 2024 IDG Communications, Inc.


