
Amazon Polly’s Brand Voices taps AI to generate custom spokespeople


If Amazon has its way, companies will soon tap Amazon Web Services (AWS) en masse to create AI-generated voices tailored to their brands. The Seattle tech giant today launched Brand Voices, a fully managed service within Amazon Polly, the AWS cloud service that converts text into lifelike speech. Brand Voices pairs customers with Amazon engineers to build voices that represent specific personas.
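For a sense of where such a custom voice would slot in, here is a minimal sketch of a Polly request using boto3. The synthesize_speech call and the neural engine are real Polly API surface, but "MyBrandVoice" is a hypothetical stand-in for the VoiceId a Brand Voices engagement would provision:

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Engine="neural",          # Brand Voices builds on Polly's neural TTS engine
    VoiceId="MyBrandVoice",   # hypothetical: a custom voice gets its own VoiceId
    OutputFormat="mp3",
    Text="Welcome back! How can I help you today?",
)

# AudioStream is a streaming body containing the synthesized MP3 bytes
with open("greeting.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```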

As Amazon director of text-to-speech Rafal Kuklinski and Amazon Polly senior product manager Ankit Dhawan explained in a blog post, Brand Voice allows organizations to differentiate their brands by incorporating unique vocal identities into their products and services. “This opens up a breadth of opportunities to create custom voices with a … speaking style that [companies] and brand[s] identify with,” they wrote.

Amazon says it worked with KFC in Canada to build a voice in a Southern U.S. English accent for the restaurant chain’s brand ambassador — Colonel Sanders — within KFC’s latest app for Amazon’s Alexa assistant. Separately, it designed an Australian English voice for National Australia Bank, which launched as a part of a broader NAB contact center migration to Amazon Connect, Amazon’s omnichannel cloud contact center.

[Audio: a sample of the Colonel Sanders voice]

[Audio: NAB’s custom voice]

Amazon detailed its work on Neural Text-to-Speech in a research paper late last year (“Effect of data reduction on sequence-to-sequence neural TTS”), in which researchers described a system that can learn a new speaking style from just a few hours of training data, as opposed to the tens of hours of recordings a voice actor might otherwise need to read in the target style.

Amazon’s AI model consists of two components. The first is a generative neural network that converts a sequence of phonemes into a sequence of spectrograms, or visual representations of the spectrum of frequencies of sound as they vary with time. The second is a vocoder that converts those spectrograms into a continuous audio signal.
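To make that division of labor concrete, here is a toy, self-contained sketch of the two-stage pipeline; the shapes and random stand-in outputs are invented for illustration and are not Amazon’s implementation:

```python
import numpy as np

def phonemes_to_spectrogram(phonemes):
    """Stage 1 stand-in: emit one 80-bin mel-spectrogram frame per phoneme.
    A real seq2seq model would generate many frames per phoneme."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(phonemes), 80))

def vocoder(spectrogram, hop_length=256):
    """Stage 2 stand-in: expand each spectrogram frame into hop_length
    waveform samples, yielding a continuous audio signal."""
    rng = np.random.default_rng(1)
    return rng.standard_normal(spectrogram.shape[0] * hop_length)

# Phoneme sequence for "hello" (ARPAbet), run through both stages
audio = vocoder(phonemes_to_spectrogram(["HH", "AH", "L", "OW"]))
print(audio.shape)  # (1024,): 4 frames x 256 samples per frame
```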

The phoneme-to-spectrogram network is a sequence-to-sequence model, meaning it doesn’t compute each output solely from the corresponding input; it also takes into account the full input sequence and the outputs it has already generated. Scientists at Amazon trained it on phoneme sequences and their corresponding spectrogram sequences, along with a “style encoding” that identified the speaking style used in each training example. The model’s output was fed into a vocoder that can take spectrograms from any speaker, regardless of whether that speaker was seen during training.
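As a rough sketch of how such a style encoding might condition the model, the hedged PyTorch snippet below broadcasts a learned style embedding across every encoder step; the paper’s actual architecture and conditioning mechanism may well differ, and all layer sizes here are invented:

```python
import torch
import torch.nn as nn

class StyleConditionedEncoder(nn.Module):
    def __init__(self, n_phonemes=50, n_styles=4, dim=128):
        super().__init__()
        self.phoneme_emb = nn.Embedding(n_phonemes, dim)
        self.style_emb = nn.Embedding(n_styles, dim)  # the "style encoding"
        self.rnn = nn.GRU(2 * dim, dim, batch_first=True)

    def forward(self, phoneme_ids, style_id):
        x = self.phoneme_emb(phoneme_ids)          # (batch, time, dim)
        s = self.style_emb(style_id).unsqueeze(1)  # (batch, 1, dim)
        s = s.expand(-1, x.size(1), -1)            # broadcast style to every step
        out, _ = self.rnn(torch.cat([x, s], dim=-1))
        return out  # encoder states a spectrogram decoder would attend over

enc = StyleConditionedEncoder()
states = enc(torch.randint(0, 50, (2, 7)), torch.tensor([0, 2]))
print(states.shape)  # torch.Size([2, 7, 128])
```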

The end result? A training method that combines a large amount of neutral-style speech data with just a few hours of supplementary data in the desired style, and an AI system capable of disentangling the elements of speech that are independent of speaking style from those unique to a given style.
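A minimal sketch of that recipe, with stand-in data and a logging-only update step (every name and proportion here is assumed for illustration): the bulk of updates draw on the neutral corpus, and a short adaptation phase mixes in the styled clips, each tagged with its style ID:

```python
import random

NEUTRAL, TARGET = 0, 1  # style IDs, playing the role of the "style encoding"

def update(model, batch, style_id):
    """Stand-in for one gradient step; here it only tallies the data mix."""
    model["steps_per_style"][style_id] += 1

model = {"steps_per_style": {NEUTRAL: 0, TARGET: 0}}
neutral_data = [f"neutral_utt_{i}" for i in range(1000)]  # tens of hours
styled_data = [f"styled_utt_{i}" for i in range(30)]      # a few hours

for _ in range(900):   # bulk of training: abundant neutral-style speech
    update(model, random.choice(neutral_data), NEUTRAL)
for _ in range(100):   # brief adaptation: the small styled supplement
    update(model, random.choice(styled_data), TARGET)

print(model["steps_per_style"])  # {0: 900, 1: 100}
```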

With Brand Voices and its other neural text-to-speech services, Amazon is effectively going toe to toe with Google, which recently debuted 31 new AI-synthesized WaveNet voices and 24 new standard voices in its Cloud Text-to-Speech service (bringing the total number of WaveNet voices to 57). It has another rival in Microsoft, which offers three AI-generated voices in preview and 75 standard voices via its Azure Speech Service API.


