In a follow-up to new compute, network and data service offerings announced by AWS CEO Adam Selipsky earlier this week, Amazon’s vice president of AI, Swami Sivasubramanian, pulled the covers off some updates to database, machine learning and serverless offerings.
Taking a cue from Selipsky’s theme of simplifying AWS’ array of services in order to make them easier to consume for developers and enterprises, Sivasubramanian announced three new updates to AWS’ plethora of database offerings. They include a new managed database service for business applications that allows developers and enterprises to customize the underlying database and operating system; a new table class for Amazon DynamoDB designed to reduce storage costs for infrequently accessed data; and a service that uses machine learning to better diagnose and remediate database-related performance issues.
AWS simplifies database customization
The new managed database service, Amazon RDS (Relational Database Service) Custom, is aimed at customers whose applications require customization at the database level and thus are responsible for administrative tasks such as provisioning, database setup, patching and backups that take up a lot of time, Sivasubramanian said.
Amazon RDS Custom automates these administrative processes while allowing customization to the database and underlying operating system these applications require, Sivasubramanian said.
“RDS Custom allows users to configure their RDS instances to exactly mimic the databases from which they have migrated,” Carl Olofson, research vice president at IDC, said. “The service becomes necessary because every relational database management system has its quirks, and some applications are developed taking them into account. Since generic RDS instances do not reflect those quirks, the application misbehaves. This overcomes that problem.”
Olofson added that while Oracle databases are currently supported, support for Microsoft SQL Server and associated tools is forthcoming.
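To make the idea concrete, the sketch below shows roughly how a developer might assemble a provisioning request for an RDS Custom Oracle instance with boto3, AWS' Python SDK. It builds the request as plain data so it runs without credentials; the database identifier and IAM instance profile name are hypothetical.

```python
# Illustrative sketch only: assembling kwargs for an RDS Custom
# provisioning call. The identifier and profile name are made up.

def build_rds_custom_request(db_id: str, instance_profile: str) -> dict:
    """Build create_db_instance parameters for an RDS Custom Oracle instance."""
    return {
        "DBInstanceIdentifier": db_id,
        # RDS Custom uses dedicated "custom-*" engine names such as
        # custom-oracle-ee, distinguishing it from standard RDS engines
        "Engine": "custom-oracle-ee",
        "DBInstanceClass": "db.m5.xlarge",
        "AllocatedStorage": 100,
        # RDS Custom requires an IAM instance profile so its automation
        # can manage the underlying host while still allowing OS- and
        # database-level customization
        "CustomIamInstanceProfile": instance_profile,
    }

params = build_rds_custom_request("legacy-erp-db", "AWSRDSCustomInstanceProfile")
# With credentials configured, this would be passed to:
#   boto3.client("rds").create_db_instance(**params)
```

The point of the "custom" engine name is that RDS still automates provisioning, patching, and backups, while the customer retains access to the operating system and database internals that standard RDS hides.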
AWS aims to reduce data storage costs
In order to reduce the cost of storing and accessing less frequently used data for developers and enterprises, AWS released a new table class called Amazon DynamoDB Standard-Infrequent Access (Standard-IA). A table class is a storage and pricing designation applied to an entire DynamoDB table, determining how the data it holds is stored and billed.
The new table class is aimed at enterprises that store huge amounts of data in non-relational databases and also need to access old data immediately, according to Sivasubramanian.
With the new Amazon DynamoDB Standard-IA table class, customers can reduce DynamoDB costs by up to 60% for tables that store infrequently accessed data, Sivasubramanian said, adding that the new table class eliminates the need for enterprise customers to write code to move infrequently accessed data from DynamoDB to lower-cost storage alternatives like Amazon S3.
The advantage of this service, according to Olofson, is that the infrequently accessed data, when called, can be accessed at the same speed as live data.
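Opting into the new table class is a single attribute on the table. The sketch below shows roughly what that looks like as a boto3 `create_table` request, built as plain data so it runs without AWS credentials; the table name and key schema are hypothetical.

```python
# Illustrative sketch only: creating a DynamoDB table in the new
# Standard-IA table class. Table name and key schema are made up.

def build_standard_ia_table_request(table_name: str) -> dict:
    """Build create_table parameters that opt the table into Standard-IA."""
    return {
        "TableName": table_name,
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "BillingMode": "PAY_PER_REQUEST",
        # The new table class: lower storage price for rarely read data,
        # with the same single-digit-millisecond access as Standard tables
        "TableClass": "STANDARD_INFREQUENT_ACCESS",
    }

req = build_standard_ia_table_request("audit-log-archive")
# With credentials configured:
#   boto3.client("dynamodb").create_table(**req)
```

Because the class applies at the table level, the pattern Sivasubramanian described is to keep cold data in a Standard-IA table instead of writing export jobs that shuttle it to S3.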
Machine learning for DevOps
Sivasubramanian also announced Amazon DevOps Guru for RDS, a service that uses machine learning to help developers better detect and diagnose hard-to-find, database-related performance issues, and that provides recommendations designed to resolve them in minutes as opposed to days.
The launch of this service pitches AWS directly against other cloud service providers such as Oracle and Microsoft. “DevOps Guru for RDS can be compared to Oracle Autonomous Database. Microsoft claims that such features are also built into Azure SQL Database,” Sivasubramanian said.
Easing machine learning for business users
In the race to up-sell more of its machine learning services, AWS has adopted the narrative of “democratization of machine learning” since 2018, focusing on making its machine learning services available and accessible to as many enterprise users as possible with its SageMaker platform.
Recognizing that more and more business users are seeking access to machine learning tools, AWS earlier this week released its SageMaker Canvas platform along with updates to several machine learning services.
While Canvas is a visual no-code platform, the other updates are targeted toward accelerating the use of other machine learning techniques for enterprises.
One such update is Amazon SageMaker Ground Truth Plus, which builds on Amazon SageMaker Ground Truth, released in 2018 to help enterprises label large data sets using human annotators via Amazon Mechanical Turk, in-house staff or third-party vendors.
Rather than relying on human annotators alone, the Ground Truth Plus service adds a labeling workflow that includes prelabeling powered by machine learning models; machine validation of human labeling to detect errors and low-quality labels; and assistive labeling features to reduce the time required to label data sets and shrink the cost of procuring high-quality annotated data, Sivasubramanian said.
He added that developers can follow the entire workflow via dashboards to inspect the annotation progress and samples of completed labels for quality.
Another update to AWS’ existing machine learning services is the Amazon SageMaker Studio set of universal notebooks, designed to provide an integrated environment allowing enterprise users to perform data engineering, analytics and machine learning.
With the introduction of this tool, data scientists and engineers no longer need to switch between multiple tools and notebooks when they are ready to integrate data across analytics or machine learning environments, Sivasubramanian said, adding that the environment also supports tasks such as querying data sources, exploring metadata and schemas, and processing jobs for analytics or machine learning workflows.
Reducing machine learning compute costs
In order to further accelerate model training and reduce the cost of compute for machine learning, AWS released a new service named Amazon SageMaker Training Compiler.
The compiler, which supports TensorFlow and PyTorch models in Amazon SageMaker, automatically optimizes training code with a single click and is designed to use compute resources more effectively, reducing the time it takes to train models by up to 50%, Sivasubramanian said.
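The "single click" amounts to one extra option on a supported SageMaker training job. The sketch below models that opt-in as plain data so it is self-contained; in the SageMaker Python SDK the equivalent is passing a `TrainingCompilerConfig` to a supported estimator. Script and instance names are hypothetical.

```python
# Illustrative sketch only: enabling SageMaker Training Compiler is a
# one-line opt-in on a training job configuration. Names are made up.

def build_training_job_config(entry_point: str, use_compiler: bool) -> dict:
    """Build a minimal training-job configuration, optionally compiled."""
    config = {
        "entry_point": entry_point,        # your TensorFlow/PyTorch script
        "instance_type": "ml.p3.2xlarge",  # GPU instance for training
        "instance_count": 1,
    }
    if use_compiler:
        # Roughly equivalent to compiler_config=TrainingCompilerConfig()
        # in the SageMaker Python SDK: the compiler optimizes the model's
        # computation graph to use GPU memory and compute more efficiently,
        # without changes to the training script itself
        config["compiler_config"] = {"enabled": True}
    return config

cfg = build_training_job_config("train_bert.py", use_compiler=True)
```

The design choice worth noting is that the optimization happens at compile time inside the managed service, so customers get the claimed speedup without hand-tuning kernels or rewriting model code.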
In another effort to make AWS machine learning services easier to use, Sivasubramanian also announced the release of Amazon SageMaker Inference Recommender and SageMaker Serverless Inference for machine learning models.
While the former automatically recommends the compute instance configuration on which a machine learning model should run in order to save cost or deployment time, the latter offers pay-as-you-go pricing for machine learning models deployed in production.
Explaining further, Sivasubramanian said that data scientists can use Amazon SageMaker Inference Recommender to run a performance benchmark simulation across a range of selected compute instances in SageMaker to evaluate the tradeoffs between different configuration settings including latency, throughput, cost, compute, and memory.
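For the serverless side of the announcement, a deployment boils down to an endpoint configuration that names no instances at all. The sketch below builds such a configuration as plain data, mirroring the shape a SageMaker `create_endpoint_config` request would take; the model and endpoint names, memory size and concurrency cap are hypothetical values chosen for illustration.

```python
# Illustrative sketch only: a Serverless Inference endpoint configuration.
# Model and config names, memory, and concurrency are made up.

def build_serverless_endpoint_config(config_name: str, model_name: str) -> dict:
    """Build endpoint-config parameters for a serverless deployment."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "serverless-v1",
            "ModelName": model_name,
            # No instance type or count: capacity scales with traffic,
            # and billing is per invocation rather than per running host,
            # which is what makes the pricing pay-as-you-go
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,  # memory allocated per invocation
                "MaxConcurrency": 20,    # cap on concurrent invocations
            },
        }],
    }

cfg = build_serverless_endpoint_config("churn-model-serverless", "churn-model")
# With credentials configured:
#   boto3.client("sagemaker").create_endpoint_config(**cfg)
```

Taken together with Inference Recommender, the pitch is that customers either get told which instance fits their model or skip instance selection entirely and let the service scale capacity for them.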
The SageMaker-related machine learning services are a differentiated way for AWS to up-sell more services, Holger Mueller, vice president and principal analyst at Constellation Research, said.
Some of the machine learning services are tailored to help customers avoid picking the wrong instance for AI workloads, Mueller said. “You also have to keep in mind that it may be difficult for enterprise users to navigate the AWS instance field and this is another way of keeping the customer happy,” he noted.
In an effort to further train people on its machine learning services, AWS launched the Amazon SageMaker Studio Lab. The lab gives users access to a no-cost version of Amazon SageMaker — an AWS service that helps customers build, train, and deploy machine learning models, Sivasubramanian said. He added that the company is also announcing a new $10 million education and scholarship program designed to prepare underrepresented and underserved students globally for careers in machine learning.
Copyright © 2021 IDG Communications, Inc.