Microsoft Azure AI gains new LLMs, governance features


Microsoft today at its annual Build conference introduced several updates to Azure AI, the company’s cloud-based platform for building and running AI applications. Azure AI competes with similar offerings from rival cloud providers such as AWS, Google, and IBM.

The updates include the addition of new governance features, new large language models (LLMs), and Azure AI Search enhancements. Microsoft also announced that it is making Azure AI Studio generally available.

Azure AI Studio, a generative AI application development toolkit that competes with the likes of Amazon Bedrock and Google Vertex AI Studio, was introduced in preview in November 2023.

In contrast to Microsoft’s Copilot Studio offering, which is a low-code tool for customizing chatbots, Azure AI Studio is aimed at professional developers, allowing them to choose generative AI models and ground them with retrieval-augmented generation (RAG) using vector embeddings, vector search, and their own data sources.

Azure AI Studio can also be used to fine-tune models and create AI-powered copilots or agents.
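
The grounding workflow described above can be illustrated with a minimal, framework-agnostic sketch: embed documents, retrieve the ones most similar to a query, and prepend them to the prompt. The bag-of-words embedding below is a toy stand-in for illustration only; a real application would call an embedding model (for example, through Azure OpenAI Service) and a managed vector store such as Azure AI Search.

```python
import math

# Toy embedding: a bag-of-words vector over a fixed vocabulary.
# Stand-in for a real embedding model; illustrative only.
def embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, vocab, k=1):
    # Rank documents by similarity to the query and return the top k.
    qv = embed(query, vocab)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d, vocab)), reverse=True)
    return ranked[:k]

docs = [
    "Azure AI Studio supports model fine-tuning",
    "Vector search retrieves semantically similar documents",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
context = retrieve("how does vector search work", docs, vocab)

# Ground the prompt with the retrieved context before calling an LLM.
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: how does vector search work"
```

The key design point of RAG is visible even in this sketch: the model is asked to answer from retrieved context rather than from its training data alone, which reduces hallucination and lets applications use private data sources.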

New models added to Azure AI

As part of the updates to Azure AI, Microsoft is adding new models to the model catalog inside Azure AI Studio, bringing the number of models available to more than 1,600.

The new models include OpenAI’s GPT-4o, showcased this week. Earlier in May, Microsoft enabled GPT-4 Turbo with Vision through Azure OpenAI Service. “With these new models developers can build apps with inputs and outputs that span across text, images, and more,” the company said in a statement.

Other models that have been added via Azure AI’s Models-as-a-Service (MaaS) offering include TimeGEN-1 from Nixtla and Core42 JAIS, which are now available in preview. Models from AI21, Bria AI, Gretel Labs, NTT Data, Stability AI, and Cohere Rerank are expected to be added soon, Microsoft said.

Further, Microsoft is updating its Phi-3 family of small language models (SLMs) with the addition of Phi-3-vision, a new multimodal model that is expected to become available in preview.

In April, Microsoft introduced three Phi-3 models—the 3.8-billion-parameter Phi-3 Mini, the 7-billion-parameter Phi-3 Small, and the 14-billion-parameter Phi-3 Medium—designed for resource-constrained environments such as on-device, edge, and offline inferencing, and to be more cost-effective for enterprises.

Microsoft’s Phi-3 builds on Phi-2, a 2.7-billion-parameter model that, Microsoft said at the time of its launch, outperformed language models up to 25 times its size. Phi-3 Mini is currently generally available as part of Azure AI’s Models-as-a-Service offering.

Other components of Azure AI were also updated, including Azure AI Speech, which now includes features such as speech analytics and universal translation to help developers build applications for use cases requiring audio input and output. The new features are available in preview.

Back in April, Microsoft updated its Azure AI Search service to increase storage capacity and vector index size at no additional cost, a move it said would make it more economical for enterprises to run generative AI-based applications.

Azure AI gets new governance, safety features

At Build 2024 Microsoft also introduced new governance and safety features for Azure AI, with the company updating its model output monitoring system, Azure AI Content Safety.

The new feature, named Custom Categories, is currently in preview and will allow developers to create custom filters for specific content filtering needs. “This new feature also includes a rapid option, enabling you to deploy new custom filters within an hour to protect against emerging threats and incidents,” Microsoft said.
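
The idea behind custom content categories can be sketched client-side as a simple configurable filter. To be clear, this is an illustration of the concept only: the actual Custom Categories feature is configured and enforced within the Azure AI Content Safety service, and the category names and matching logic below are hypothetical.

```python
# Illustrative sketch of a configurable content-category filter.
# The real Azure AI Content Safety Custom Categories feature runs
# server-side; this toy version just does keyword matching.
def build_filter(category_terms):
    """Return a checker that flags text matching any configured category."""
    def check(text):
        lowered = text.lower()
        return sorted(
            category
            for category, terms in category_terms.items()
            if any(term in lowered for term in terms)
        )
    return check

# Hypothetical category definitions an application might maintain to
# respond quickly to an emerging threat or incident.
checker = build_filter({
    "phishing": ["verify your account", "reset your password here"],
    "malware": ["download this attachment"],
})

flags = checker("Please verify your account by clicking the link")
```

The point of the "rapid option" Microsoft describes is operational: a newly defined category like the hypothetical "phishing" one above can be deployed quickly, rather than waiting for a platform-wide filter update.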

Other governance features added to Azure AI Studio and Azure OpenAI Service include Prompt Shields and Groundedness Detection, both of which are in preview.

Prompt Shields mitigates jailbreak and indirect prompt injection attacks on LLMs, while Groundedness Detection checks generative AI applications for ungrounded outputs, or hallucinations, in generated responses.
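
The core idea of groundedness detection—checking whether a generated response is actually supported by the source material—can be illustrated with a toy lexical-overlap score. This is not how Azure's Groundedness Detection works (the service uses a language model to judge support); the stopword list and threshold here are illustrative assumptions.

```python
# Toy groundedness score: fraction of a response's content words that
# appear in the grounding sources. Azure's Groundedness Detection uses
# an LLM-based judge, not this simple lexical overlap.
STOPWORDS = {"the", "a", "an", "is", "are", "in", "of", "to", "and"}

def groundedness(response, sources):
    source_words = {w for s in sources for w in s.lower().split()}
    content = [w for w in response.lower().split() if w not in STOPWORDS]
    if not content:
        return 1.0  # nothing to verify
    hits = sum(1 for w in content if w in source_words)
    return hits / len(content)

sources = ["the model catalog now lists more than 1,600 models"]
grounded = groundedness("the catalog lists 1,600 models", sources)
ungrounded = groundedness("the catalog removed all models yesterday", sources)
```

Even this crude version captures the contract such a check offers application builders: responses scoring below some threshold can be blocked, regenerated, or flagged for review before reaching users.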

Microsoft said that it currently has 20 responsible AI tools with more than 90 features across its offerings and services.

To secure generative AI applications, Microsoft said that it was integrating Microsoft Defender for Cloud across all of its AI services. “Threat protection for AI workloads in Defender for Cloud leverages a native integration with Azure AI Content Safety to enable security teams to monitor their Azure OpenAI applications for direct and indirect prompt injection attacks, sensitive data leaks, and other threats so they can quickly investigate and respond,” the company said.

To further tighten security, enterprise developers can also integrate Microsoft Purview into their applications and copilots with the help of APIs, according to Jessica Hawk, corporate vice president of data, AI, and digital applications at Microsoft.

This will help developers and copilot customers to discover data risks in AI interactions, protect sensitive data with encryption, and govern AI activities, Hawk added.

These capabilities are available for Copilot Studio in public preview and will be available in public preview for Azure AI Studio in July via the Purview SDK.

Other security updates include integration of what Microsoft calls “hidden layers security scanning” into Azure AI Studio to scan every model for malware.

Another feature, called Facial Liveness, has been added to the Azure AI Vision Face API. “Windows Hello for Business uses Facial Liveness as a key element in multi-factor authentication (MFA) to prevent spoofing attacks,” Hawk explained.

Copyright © 2024 IDG Communications, Inc.
