
Apple researchers improve Siri’s ability to match commands with domains


It’s no great secret that Apple’s voice assistant has plenty of room for improvement. The Cupertino company is aware of this — in June, it debuted an improved neural text-to-speech model capable of delivering a more natural-sounding voice without the use of samples. And in a newly published research paper on the preprint server Arxiv.org, a team of Apple scientists describes an approach for selecting training data for Siri’s domain classifier — the component that chooses whether a person’s command relates to, say, their calendar rather than their alarms — that leads to a substantial error reduction with only a small percentage of new examples.

As the researchers explain, Siri processes speech in stages to suss out the intended domain and to identify a given user’s intent. A classifier called the Domain Chooser first matches an utterance to one of the more than 60 defined domains — it might pair “What is the weather today?” with the domain “Weather,” for instance, and “What time does Starbucks close” with “LocalBusiness.” A component called the Statistical Parser then assigns a parse label to each part of the utterance, after which the domain and parse labels predicted by the Domain Chooser and Statistical Parser are mapped into an intent representation that kicks off the appropriate action.
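
The flow from utterance to action can be sketched roughly as follows. The class names DomainChooser and StatisticalParser mirror the components described above, but the keyword-matching internals are illustrative stand-ins, not Apple’s actual implementation.

```python
# Rough sketch of the two-stage pipeline: domain selection, then parsing,
# then mapping (domain, parse labels) into an intent.

class DomainChooser:
    """Stand-in domain classifier: maps an utterance to one of ~60 domains."""
    def predict(self, utterance: str) -> str:
        text = utterance.lower()
        if "weather" in text:
            return "Weather"
        if "close" in text:
            return "LocalBusiness"
        return "Unknown"

class StatisticalParser:
    """Stand-in parser: assigns a parse label to each token, given a domain."""
    def parse(self, utterance: str, domain: str) -> list:
        return [(token, f"{domain}.Other") for token in utterance.split()]

def handle_utterance(utterance: str) -> dict:
    domain = DomainChooser().predict(utterance)                  # step 1: pick a domain
    parse_labels = StatisticalParser().parse(utterance, domain)  # step 2: label each token
    # step 3: the (domain, parse labels) pair becomes the intent representation
    # that ultimately triggers the action
    return {"domain": domain, "parse": parse_labels}

print(handle_utterance("What is the weather today?"))
```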

The Domain Chooser is a multi-class system consisting of seven bidirectional long short-term memory (bi-LSTM) networks, a type of AI model able to learn long-term dependencies in sequential data. “Teaching” these networks to recognize utterances from new domains, or to classify known domains more accurately, requires high-quality training data, which in turn requires slow and expensive human annotation.
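
A single member of such an ensemble might look like the PyTorch sketch below; the vocabulary size, embedding and hidden dimensions, and pooling choice are placeholder assumptions rather than figures from the paper.

```python
import torch
import torch.nn as nn

class BiLSTMDomainClassifier(nn.Module):
    """One bi-LSTM domain classifier; the Domain Chooser combines seven of these."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_domains=60):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_domains)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        outputs, _ = self.bilstm(embedded)        # (batch, seq_len, 2 * hidden_dim)
        pooled = outputs.mean(dim=1)              # average over time steps
        return self.classifier(pooled)            # unnormalized domain scores

model = BiLSTMDomainClassifier()
logits = model(torch.randint(0, 10000, (4, 12)))  # 4 utterances, 12 tokens each
print(logits.shape)                               # torch.Size([4, 60])
```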


The paper’s coauthors advocate what they describe as an active learning technique. Using an ensemble of classifiers, they identify incorrectly labeled data samples near the decision boundary learned by the existing classifier — the region of the problem space in which the output label is ambiguous — so that adding correctly labeled versions of those samples sharpens the boundaries learned from the new data.
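
One common way to operationalize that idea is to score unlabeled utterances by how much the ensemble disagrees about them and to prioritize the most ambiguous ones for annotation. The vote-entropy measure below is a standard active-learning heuristic offered as an illustration, not necessarily the exact criterion the paper uses.

```python
import math
from collections import Counter

def vote_entropy(votes):
    """Entropy of the ensemble's domain votes for one utterance; high = near the boundary."""
    counts = Counter(votes)
    total = len(votes)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def select_for_annotation(unlabeled_utterances, ensemble, budget=100):
    """Pick the utterances the ensemble is least sure about and send them to annotators."""
    scored = []
    for utterance in unlabeled_utterances:
        votes = [clf.predict(utterance) for clf in ensemble]  # one predicted domain per member
        scored.append((vote_entropy(votes), utterance))
    scored.sort(reverse=True)                                 # most ambiguous first
    return [utterance for _, utterance in scored[:budget]]
```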

Siri already allows for the discovery of prediction errors from bug reports, quality assurance testing, and user actions that imply requests weren’t correctly interpreted (e.g., quickly pausing a song or quitting an app launched by Siri), but the researchers note that these errors are relatively few in number. To expand the set of hypothesized prediction errors, they propose finding examples in a pool of unlabeled data that are similar to the confirmed prediction errors, having human annotators assign the correct labels, and adding the results to the original training data. In practice, an error in which the utterance “Spell ‘volume’” was matched with the Settings domain would surface examples of utterances similarly misclassified as Settings, like “Increase volume” and “What is the volume.”
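
The expansion step amounts to a similarity search over the unlabeled pool. As a hedged illustration, the snippet below uses TF-IDF vectors and cosine similarity — an assumption made for the sake of a runnable example, not the representation the paper itself uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_errors(confirmed_errors, unlabeled_pool, per_error=5):
    """Return unlabeled utterances that resemble confirmed misclassifications."""
    vectorizer = TfidfVectorizer().fit(confirmed_errors + unlabeled_pool)
    error_vecs = vectorizer.transform(confirmed_errors)
    pool_vecs = vectorizer.transform(unlabeled_pool)
    similarity = cosine_similarity(error_vecs, pool_vecs)      # shape: (errors, pool)

    candidates = set()
    for row in similarity:
        nearest = row.argsort()[::-1][:per_error]              # most similar pool items first
        candidates.update(unlabeled_pool[i] for i in nearest)
    return sorted(candidates)                                   # queue these for annotation

errors = ["spell volume"]                                       # misrouted to the Settings domain
pool = ["increase volume", "what is the volume", "play some jazz", "set an alarm"]
print(expand_errors(errors, pool, per_error=2))
```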

In the first of two experiments, the paper’s coauthors compiled a corpus consisting of 850,000 randomly selected utterances from a development set previously used to debug Siri, plus 20,000 labeled utterances obtained with their method. They used it to train the Domain Chooser, which across 11 tests showed an 8.88% reduction in error rate, with the new examples accounting for only 2.3% of the training data.
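
A quick back-of-the-envelope check of that proportion, assuming the 2.3% refers to the share of new examples in the combined 870,000-utterance training set:

```python
baseline_utterances = 850_000       # randomly selected development-set utterances
new_labeled_utterances = 20_000     # examples discovered and labeled via the proposed method

share_of_new_data = new_labeled_utterances / (baseline_utterances + new_labeled_utterances)
print(f"{share_of_new_data:.1%}")   # -> 2.3%
```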


“In this paper, we have proposed a simple but effective method for efficient discovery of useful training data for a domain chooser classifier, as part of [Siri],” the researchers wrote. “The method produces … better quality data … [which] reduces the time taken for human annotation … Although developed and tested in the setting of a commercial intelligence assistant, the technique is widely applicable.”

After a lengthy period of stagnation, Apple has redoubled efforts to expand Siri’s capabilities while improving the assistant’s overall performance. A contractor speaking to The Guardian this week said that the company is working on future updates that’ll enable Siri to “have a back-and-forth conversation about health problems” and offer integrated machine translation functionality. Separately, Apple is in the process of expanding Siri’s functionality in iOS 13, giving the Apple Watch more robust access to the digital assistant’s Shazam, Find My, and App Store-hunting features and enabling a HomePod smart speaker to begin playback based on a verbal command initiated on another Siri device.


