Tech Alert: Reducing Operational Costs of AI/ML Architectures


SANTA CLARA, Calif. – Artificial Intelligence and Machine Learning applications are becoming ubiquitous in companies of all sizes, across all industries, and require significant up-front investment in hardware. Along with initial costs, however, come ongoing operational costs of running these data-intensive workloads. Experts at Quobyte® Inc., a leading developer of modern storage system software, offer the following tips for reducing the operational expenses of AI/ML infrastructures.

Smart storage

AI/ML workloads have different performance profiles: the model training stage demands high throughput and low latency, while ingest and other stages can produce large-block sequential, small-block random, or mixed general workloads. The storage must be performant enough to keep up with these varying requirements. GPUs are among the most expensive assets in the whole system and should be kept busy, not wastefully underutilized.
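To make the cost of underutilization concrete, here is a minimal back-of-the-envelope sketch. All figures (GPU count, hourly cost, utilization levels) are illustrative assumptions, not vendor numbers:

```python
# Illustrative sketch: estimating the dollar cost of idle GPU time when
# storage cannot keep the GPUs fed. All figures below are assumptions.

def idle_gpu_cost(num_gpus, hourly_cost, utilization, hours):
    """Dollars spent on GPU hours that produce no work."""
    return num_gpus * hourly_cost * hours * (1.0 - utilization)

# Assumed: 8 GPUs at $3/hour each over a 720-hour (30-day) month.
cost_at_60 = idle_gpu_cost(8, 3.0, 0.60, 720)  # ~$6,912 wasted
cost_at_90 = idle_gpu_cost(8, 3.0, 0.90, 720)  # ~$1,728 wasted
print(f"Idle cost at 60% utilization: ${cost_at_60:,.0f}")
print(f"Idle cost at 90% utilization: ${cost_at_90:,.0f}")
```

Even modest utilization gains compound quickly across a larger fleet and a full year.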

Avoid point solutions

There should be no need to build and maintain a separate storage architecture for AI/ML. It’s tempting to keep production workloads on a different system, but cobbling together two or three products to handle AI/ML data requirements adds a layer of complexity, and incurs management and maintenance costs. Instead of an isolated infrastructure, use the performance and capacity resources you have.

No more tiers

Likewise, avoid cumbersome tiering. A unique characteristic of AI/ML workloads is that all data is conceivably hot data. Training data is used frequently and should remain accessible. Typical tiering strategies introduce bottlenecks and can ultimately add to overall costs rather than delivering the intended savings. Rather than using different storage based on whether data is hot or cold, mix HDD and SSD in one system to get the best price/performance for your needs.
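A simple blended-cost calculation shows how mixing media in a single pool plays out. The per-terabyte prices and capacity split below are assumptions for illustration only:

```python
# Illustrative sketch of blended cost per TB when mixing SSD and HDD
# in one storage pool instead of separate tiers. Prices are assumed.

def blended_cost_per_tb(ssd_tb, ssd_price, hdd_tb, hdd_price):
    """Average $/TB across a mixed SSD/HDD pool."""
    total_cost = ssd_tb * ssd_price + hdd_tb * hdd_price
    return total_cost / (ssd_tb + hdd_tb)

# Assumed: 100 TB of SSD at $80/TB alongside 900 TB of HDD at $15/TB.
cost = blended_cost_per_tb(100, 80.0, 900, 15.0)
print(f"Blended cost: ${cost:.2f}/TB")  # $21.50/TB
```

Adjusting the SSD fraction lets you tune the price/performance point without the operational burden of moving data between tiers.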

Eliminate migration

With one tier-free, unified storage infrastructure throughout the data center, you can eliminate the time-consuming and error-prone process of copying data from stage to stage and locality to locality. Data should never leave the file system at any point during the entire AI/ML lifecycle.

Consider cost of scale

AI/ML data sets easily grow to the hundreds of petabytes. A high-capacity storage solution may be as easy to manage as a single box today, but will require exponentially more manual labor when you have 200 of them. Maintenance looks very different at scale when drive failures, network issues, broken hardware, or updates become a daily occurrence.
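A rough calculation illustrates why maintenance becomes a daily task at scale. The server count, drives per server, and annualized failure rate (AFR) below are assumed figures:

```python
# Back-of-the-envelope sketch of how often drive failures occur at scale.
# Drive counts and the annualized failure rate (AFR) are assumptions.

def days_between_failures(num_drives, afr):
    """Expected days between drive failures across the whole fleet."""
    failures_per_year = num_drives * afr
    return 365.0 / failures_per_year

# Assumed: 200 storage servers x 60 drives each, at a 1.5% AFR.
drives = 200 * 60
interval = days_between_failures(drives, 0.015)
print(f"{drives} drives -> a failure roughly every {interval:.1f} days")
```

At that scale a drive fails about every two days, which is why self-healing matters more than per-box ease of management.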

Be ready for change

As project requirements change – sometimes more quickly than anticipated – the infrastructure must adapt. Downtime equals dollars wasted as GPUs sit idle. You should be able to add disks or servers when you need more performance, without any interruption to applications or services. Storage software can automatically detect and remove broken hardware; self-healing capabilities will use available resources elsewhere in the cluster until the failed hardware can be replaced at a convenient time.

“Hardware and GPUs are a big up-front investment, but there are running costs to consider, and the way the AI/ML infrastructure is built can add to or mitigate them,” said Bjorn Kolbeck, Quobyte CEO. “Containing ongoing costs starts with getting the storage performance to take full advantage of your investment, and building a flexible environment that grows when needed without a maintenance and management overhead.”

Follow Quobyte

https://www.twitter.com/quobyte

https://www.linkedin.com/company/quobyte

https://www.facebook.com/quobyte

About Quobyte

Building on a decade of research and experience with the open-source distributed file system XtreemFS and from working on Google’s infrastructure, Quobyte delivers on the promise of software-defined storage for the world’s most demanding application environments including High Performance Computing (HPC), Machine Learning (ML), Media & Entertainment (M&E), Life Sciences, Financial Services, and Electronic Design Automation (EDA). Quobyte uniquely leverages hyperscaler parallel distributed file system technologies to unify file, block, and object storage. This allows customers to easily replace storage silos with a single, scalable storage system – significantly saving manpower, money, and time spent on storage management. Quobyte allows companies to scale storage capacity and performance linearly on commodity hardware while eliminating the need to expand administrative staff through the software’s ability to self-monitor, self-maintain, and self-heal. Please visit www.quobyte.com for more information.




