
AWS adds ML-based call analytics capabilities to Amazon Chime SDK


Amazon Web Services on Monday said it is adding call analytics capabilities to its Amazon Chime SDK offering in a bid to reduce the time and cost of generating insights from real-time audio calls through transcription and voice analysis.

Amazon Chime SDK is a software development kit used by developers to add messaging, audio, video, and screen-sharing capabilities to their web or mobile applications.

The new updates to the Chime SDK give developers the ability to add machine learning-based voice analytics to their applications, the company said, adding that the underlying machine learning model can detect and classify whether participants are expressing a positive, neutral, or negative tone.

“Voice tone analysis uses machine learning (ML) to extract sentiment from a speech signal based on a joint analysis of lexical and linguistic information as well as acoustic and tonal information,” Sébastien Stormacq, principal developer advocate at AWS, wrote in a blog post.

“Voice tone analysis for live calls is delivered in the data lake of your choice, on top of which you can create your own dashboards to visualize the data,” Stormacq added.
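For a rough sense of what working with those delivered analyses could look like, the following Python sketch tallies sentiment labels from voice tone records landed in an Amazon S3-based data lake. The bucket name, key prefix, and JSON field names are illustrative assumptions rather than a documented schema.

```python
# Minimal sketch: tally voice tone results already delivered to an S3-based
# data lake. The bucket, prefix, and the field names accessed below
# ("detail", "voiceToneAnalysis", "sentiment") are assumptions for
# illustration; adjust them to the actual record schema.
import json

import boto3

s3 = boto3.client("s3")

BUCKET = "my-call-analytics-lake"   # hypothetical bucket
PREFIX = "voice-tone/2023/03/"      # hypothetical key prefix

counts = {"POSITIVE": 0, "NEUTRAL": 0, "NEGATIVE": 0}

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        record = json.loads(body)
        # Assumed field layout for a voice tone record.
        sentiment = (
            record.get("detail", {})
            .get("voiceToneAnalysis", {})
            .get("sentiment", "NEUTRAL")
        )
        counts[sentiment] = counts.get(sentiment, 0) + 1

print(counts)
```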

AWS has been seeing strong demand for call analytics capabilities, sometimes due to regulatory requirements from sectors such as banking and financial services, the company said.

“We have received similar requests from customers in Business Process Outsourcing (BPO), public sector, healthcare, telecom, and insurance industries,” Stormacq wrote, adding that analyzing calls and generating insights from them can have a positive impact on sales strategy and employee productivity, and boost the overall efficiency of an enterprise.

AWS Management Console updated for easier call analytics integration

In addition to adding call analytics capabilities to the SDK, AWS has made it easier for developers to integrate those capabilities into their applications from within the AWS Management Console, the web-based interface for managing AWS services.

The changes to the AWS Management Console include a graphical configuration workflow in the Amazon Chime SDK section of the console that allows developers to integrate analytics into audio applications without writing code or requiring expertise in cloud infrastructure, telephony, or AI, the company said.

“On the console, you can choose the AWS AI service you want to use to analyze real-time audio data: voice analytics, Amazon Transcribe, or Amazon Transcribe Call Analytics. We manage the integrations with AWS AI services and your voice-based or telephony applications,” Stormacq wrote.

The management console, according to the company, also lets developers define where they want to send the analytics data: either an Amazon Kinesis stream or an Amazon Simple Storage Service (Amazon S3) bucket.
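For teams that prefer to script the same setup rather than click through the console, a boto3 sketch along the following lines could create a media insights pipeline configuration that pairs a Transcribe Call Analytics processor with a Kinesis Data Stream sink. The element types, configuration keys, and ARNs shown are best-effort assumptions and placeholders; verify them against the current chime-sdk-media-pipelines documentation before use.

```python
# Sketch of creating a Chime SDK media insights pipeline configuration that
# sends Transcribe Call Analytics output to a Kinesis data stream. Element
# types and configuration keys are as best recalled from the
# chime-sdk-media-pipelines API; confirm exact names in the boto3 docs.
import boto3

pipelines = boto3.client("chime-sdk-media-pipelines")

response = pipelines.create_media_insights_pipeline_configuration(
    MediaInsightsPipelineConfigurationName="call-analytics-demo",
    # Placeholder IAM role that grants the pipeline access to the sink.
    ResourceAccessRoleArn="arn:aws:iam::123456789012:role/ChimeSdkInsightsRole",
    Elements=[
        {
            "Type": "AmazonTranscribeCallAnalyticsProcessor",
            "AmazonTranscribeCallAnalyticsProcessorConfiguration": {
                "LanguageCode": "en-US",
            },
        },
        {
            # Deliver generated insights to a Kinesis data stream; an S3
            # sink element could be configured instead.
            "Type": "KinesisDataStreamSink",
            "KinesisDataStreamSinkConfiguration": {
                "InsightsTarget": "arn:aws:kinesis:us-east-1:123456789012:stream/call-insights",  # placeholder
            },
        },
    ],
)
print(response)
```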

“Voice analytics can send real-time notifications to a function deployed on AWS Lambda, or an SQS queue or Amazon Simple Notification Service topic,” Stormacq wrote.
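A minimal AWS Lambda handler for such notifications might look like the sketch below. The fields it reads from the event payload are assumptions for illustration; logging the raw event first is a practical way to discover the actual shape.

```python
# Minimal Lambda handler sketch for voice analytics notifications.
# The payload fields accessed here ("detail", "voiceToneAnalysisStatus") are
# illustrative assumptions, not a documented contract.
import json


def lambda_handler(event, context):
    # Log the raw notification so the real schema can be inspected.
    print(json.dumps(event))

    detail = event.get("detail", {})
    status = detail.get("voiceToneAnalysisStatus")  # assumed field name
    if status:
        print(f"Voice tone analysis update: {status}")

    return {"statusCode": 200}
```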

In order to visualize these insights, enterprises need to first deliver the analyses to a data lake and then use a service such as Amazon QuickSight or Tableau to build dashboards, the company said.

“These dashboards can be embedded in apps, wikis, and portals. You can download prebuilt dashboards as AWS CloudFormation templates to deploy into your own AWS account,” Stormacq wrote.
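As an example of that last step, a downloaded prebuilt dashboard template could be deployed with a short boto3 CloudFormation call such as the following; the template file name, stack name, and required capabilities are placeholders that depend on the template actually used.

```python
# Sketch: deploy a downloaded prebuilt dashboard template with CloudFormation.
# File name and stack name are placeholders; the capabilities required depend
# on the resources the template creates.
import boto3

cfn = boto3.client("cloudformation")

with open("chime-call-analytics-dashboard.yaml") as f:  # downloaded template (placeholder name)
    template_body = f.read()

cfn.create_stack(
    StackName="call-analytics-dashboard",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # only if the template creates IAM resources
)

# Block until the stack finishes creating before opening the dashboard.
cfn.get_waiter("stack_create_complete").wait(StackName="call-analytics-dashboard")
```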

Call analytics can also be used to generate real-time alerts by posting events to Amazon EventBridge, the company said. These alerts can be consumed within the same AWS account or routed to third-party applications.
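A hedged sketch of such an EventBridge integration, routing matching events to an Amazon SNS topic for alerting, is shown below. The event source value and the topic ARN are assumptions and placeholders; the Chime SDK event reference should be consulted for the exact source and detail-type values to match on.

```python
# Sketch: create an EventBridge rule that forwards Chime SDK call analytics
# events to an SNS topic. The "aws.chime" source and the topic ARN are
# assumptions/placeholders; narrow the pattern with detail-type as needed.
import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="chime-call-analytics-alerts",
    EventPattern=json.dumps({"source": ["aws.chime"]}),
    State="ENABLED",
)

events.put_targets(
    Rule="chime-call-analytics-alerts",
    Targets=[
        {
            "Id": "alert-topic",
            # The SNS topic policy must also allow events.amazonaws.com to publish.
            "Arn": "arn:aws:sns:us-east-1:123456789012:call-alerts",  # placeholder
        }
    ],
)
```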

While the new call analytics capabilities require no infrastructure investment, AWS will charge enterprises based on usage. Pricing is based on the minutes of audio analyzed, with per-minute rates varying across regions, the company said.

The call analytics features are currently available in the US East (Ohio), US East (N. Virginia), Asia Pacific (Singapore), and Europe (Frankfurt) regions.

Copyright © 2023 IDG Communications, Inc.


