Microsoft’s Kubernetes for the rest of us


Building and managing a Kubernetes infrastructure in the cloud can be hard, even using a managed environment like Azure Kubernetes Service (AKS). When designing a cloud-native application, you need to consider the underlying virtual infrastructure, and provision the right class of servers for your workload and the right number to support your predicted scaling. Then there’s the requirement to build and manage a service mesh to handle networking and security.

It’s a lot of work that adds several new devops layers to your development team: one for the physical or virtual infrastructure, one for your application and its associated services, and one to manage your application platform, whether that’s Kubernetes or another orchestration or container-management layer. It’s a significant issue, one that negates many of the benefits of moving to a hyperscale cloud provider: if you don’t have the budget to manage Kubernetes, you can’t take advantage of most of these technologies.

Cloud-native can be hard

There have been alternatives, building on back-end-as-a-service technologies like Azure App Service. Here you can use containers as an alternative to the built-in runtimes, substituting your own userland for Azure’s managed environment. However, these tools tend to focus on building and running services that support web and mobile apps, not the message-driven, scalable environments needed to work with the Internet of Things or other event-centric systems. And although you could use serverless technologies like Azure Functions, you can’t package all the elements of an application or work with networking and security services.

What’s needed is a way to deliver Kubernetes services in a serverless fashion, one that lets you hand over the operation of the underlying servers or virtual infrastructure to a cloud provider. You build on the provider’s infrastructure expertise to manage your application services as well as the underlying virtual networks and servers. Where you would have spent time crafting YAML and configuring Kubernetes, you’re now relying on your provider’s automation and concentrating on building your application.

Introducing Azure Container Apps

At Ignite, Microsoft introduced a preview of a new Azure platform service, Azure Container Apps (ACA), that does just that, offering a serverless container service that manages scaling for you. All you need to do is bring your packaged containers, ready to run. Under the hood, ACA is built on top of familiar AKS services, with support for KEDA (Kubernetes-based Event-Driven Autoscaling) and the Envoy service mesh. Applications can take advantage of Dapr (Distributed Application Runtime), giving you a common target for your code that lets application containers run both on existing Kubernetes infrastructure and in the new service.

Microsoft suggests four different scenarios where Azure Container Apps might be suitable:

  • Handling HTTP-based APIs
  • Running background processing
  • Triggering on events from any KEDA-compatible source
  • Running scalable microservice architectures

That last option makes ACA a very flexible tool, offering scale-to-zero, pay-as-you-go hosting for application components that may not be used much of the time. You can host an application across several different Azure services, calling your ACA services as and when they’re needed, without incurring costs while they’re quiescent.

Costs in the preview are low. Here are some numbers for the East US 2 region:

  • Requests cost $0.40 per million, with the first 2 million each month free.
  • vCPU time is billed per vCPU-second: $0.000024 active and $0.000003 idle.
  • Memory is billed per GB-second: $0.000003 for both active and idle containers.
  • There’s a free grant per month of 180,000 vCPU-seconds and 360,000 GB-seconds.
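To put those rates in context, here’s a rough, illustrative calculation (my arithmetic based on the numbers above, not Microsoft’s): a single container allocated 0.25 vCPU and 0.5GB of memory, active for a total of 1,000,000 seconds in a month (a little under 12 days of accumulated activity), works out as follows:

    vCPU:   0.25 vCPU x 1,000,000s  = 250,000 vCPU-seconds
            250,000 - 180,000 free  =  70,000 billable
            70,000 x $0.000024      =  $1.68
    Memory: 0.5GB x 1,000,000s      = 500,000 GB-seconds
            500,000 - 360,000 free  = 140,000 billable
            140,000 x $0.000003     =  $0.42

That’s around $2.10 for the month, plus any request charges beyond the free 2 million.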

All you need to use Azure Container Apps is an application packaged in a container, using any runtime you want. It’s much the same as running on Kubernetes: containers are packaged with all your application dependencies and designed to run stateless. If you need state, you’ll have to configure an Azure storage or database environment to hold and manage application state for you, in line with best practices for AKS. There’s no access to the Kubernetes APIs; everything is managed by the platform.
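If you’re new to packaging, a minimal sketch of a container image for a stateless Node.js service might look like this Dockerfile (the file names and port here are illustrative assumptions, not ACA requirements):

    # Minimal, illustrative Dockerfile for a stateless Node.js service
    FROM node:16-alpine
    WORKDIR /app
    # Install production dependencies first so they cache as a layer
    COPY package*.json ./
    RUN npm ci --only=production
    # Copy the application code; state belongs in external Azure services
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]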

Although there are some similarities with Azure Functions, with scale-to-zero serverless options, Azure Container Apps is not a replacement for Functions. Instead, it’s best thought of as a home for more complex applications. Azure Container Apps containers don’t have a limited lifespan, so you can use them to host complex applications that run for a long time, or even for background applications.

Getting started with Azure Container Apps

Getting started with Azure Container Apps is relatively simple, whether you use the Azure Portal, work with ARM templates, or go programmatic via the Azure CLI. In the Azure Portal, start by setting up your app environment and associated monitoring and storage in an Azure resource group. The app environment is the isolation boundary for your services, automatically setting up a local network for deployed containers. Next, create a Log Analytics workspace for your environment.
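As a sketch of the equivalent CLI steps (the service is in preview, so the flags shown here may change, and the resource names are illustrative):

    # Create a resource group to hold the environment and its resources
    az group create --name my-aca-group --location eastus2

    # Create a Log Analytics workspace for environment logging
    az monitor log-analytics workspace create \
      --resource-group my-aca-group \
      --workspace-name my-aca-logs

    # Create the Container Apps environment, passing the workspace's
    # customer ID and shared key retrieved from the workspace
    az containerapp env create \
      --name my-aca-env \
      --resource-group my-aca-group \
      --logs-workspace-id <workspace-customer-id> \
      --logs-workspace-key <workspace-shared-key> \
      --location eastus2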

Containers are assigned CPU cores and memory, starting at 0.25 cores and 0.5GB of memory per container and going up to 2 cores and 4GB of memory. Fractional cores are a result of shared tenancy, with core-based compute shared between users. This allows Microsoft to run very high-density Azure Container Apps environments, making efficient use of Azure resources for small event-driven containers.

Containers are loaded from Azure Container Registry or any public registry, including Docker Hub. This approach allows you to target Azure Container Apps from your existing CI/CD (continuous integration and continuous delivery) pipeline, delivering a packaged container into a registry ready for use in Azure Container Apps. Currently there’s only support for Linux-based containers, but with support for .NET, Node.js, and Python, you should be able to quickly port any app or service to an ACA-ready container.

Once you’ve chosen a container, you can choose to allow external access for HTTPS connections. You don’t need to add and configure any Azure networking features, like VNets or load balancers; the service will automatically add them if needed.
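Putting those pieces together, deploying a container with the smallest resource combination and external HTTPS ingress looks something like this preview-era sketch (names and the target port are illustrative; a private registry would also need the CLI’s registry credential flags):

    # Deploy a registry-hosted container with 0.25 cores / 0.5GB
    # and externally reachable HTTPS ingress
    az containerapp create \
      --name my-container-app \
      --resource-group my-aca-group \
      --environment my-aca-env \
      --image myregistry.azurecr.io/my-app:latest \
      --cpu 0.25 \
      --memory 0.5Gi \
      --ingress external \
      --target-port 3000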

Using the Azure CLI to deploy and scale Dapr apps

More complex applications, like those built using Dapr, need to be configured through the Azure CLI. Working with the CLI requires adding an extension and enabling a new namespace. The service is still in preview, so you’ll need to load the CLI extension from a Microsoft Azure blob. As with the portal, create an Azure Container Apps environment and a Log Analytics workspace. Start by setting up a state store in an Azure Blob Storage account for any Dapr apps deployed to the service, along with the appropriate configuration YAML files for your application. These should contain details of your application container, along with a pointer to the Dapr sidecar that manages application state.
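In outline, the preview setup looks like this (the extension URL and version come from Microsoft’s preview documentation and will change as the service matures):

    # Add the preview Container Apps extension to the Azure CLI
    az extension add \
      --source https://workerappscliextension.blob.core.windows.net/azure-cli-extension/containerapp-0.2.0-py2.py3-none-any.whl

    # Register the namespace that hosts Container Apps resources
    az provider register --namespace Microsoft.Web

The state store component file follows the standard Dapr component format; a sketch of an illustrative components.yaml backed by Azure Blob Storage (account, key, and container names are placeholders):

    # components.yaml: a Dapr state store backed by Azure Blob Storage
    - name: statestore
      type: state.azure.blobstorage
      version: v1
      metadata:
      - name: accountName
        value: mystorageaccount
      - name: accountKey
        value: <storage-account-key>
      - name: containerName
        value: mycontainer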

You can now deploy your application containers from a remote registry with a single CLI command that adds them to your resource group and enables any Dapr features. At the same time, configure a minimum and maximum number of replicas so you can manage how the service scales your apps. Currently you’re limited to a maximum of 25 replicas, with the option of scaling to zero. It’s important to remember that there is a start-up time associated with launching any new replica, so you may want to keep a single replica running at all times. However, this will mean being billed for that resource at the service’s idle rate.
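A Dapr-enabled deployment with replica limits might look like this under the preview CLI (again, the names and port are illustrative; the minimum of one replica keeps a warm instance to avoid cold starts):

    # Deploy a Dapr app with a sidecar, keeping one replica warm
    az containerapp create \
      --name my-dapr-app \
      --resource-group my-aca-group \
      --environment my-aca-env \
      --image myregistry.azurecr.io/my-dapr-app:latest \
      --min-replicas 1 \
      --max-replicas 25 \
      --enable-dapr \
      --dapr-app-id my-dapr-app \
      --dapr-app-port 3000 \
      --dapr-components ./components.yaml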

You can then define your scale triggers as rules in JSON configuration files. For HTTP requests (for example, when you’re running a REST API microservice), you choose the number of concurrent requests an instance can service. As soon as you go over that limit, Azure Container Apps launches a new container replica, up to your preset maximum. Event-driven scaling uses KEDA metadata to determine what rules are applied.
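For the HTTP case, the rule lives in the scale section of the app’s template. A sketch following the preview schema, with an assumed limit of 50 concurrent requests per replica:

    "scale": {
      "minReplicas": 0,
      "maxReplicas": 10,
      "rules": [
        {
          "name": "http-scaling-rule",
          "http": {
            "metadata": {
              "concurrentRequests": "50"
            }
          }
        }
      ]
    }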

Choose the name of the event used to scale your application, the type of service you’re using, and the metadata and trigger used to scale. For example, a message queue might have a maximum queue length, so when the queue reaches its maximum length, a new container replica is launched and attached to the queue. Other scaling options are based on standard Kubernetes functions, so you can use CPU utilization and memory usage to scale. It’s important to note that this is only a scale-out system; you can’t change the resources assigned to a container.
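For the queue example, an event-driven rule borrows the metadata of the relevant KEDA scaler. A sketch built on KEDA’s azure-servicebus scaler, where the queue name and length are assumptions and the connection string is referenced as a stored secret:

    {
      "name": "queue-scaling-rule",
      "custom": {
        "type": "azure-servicebus",
        "metadata": {
          "queueName": "myqueue",
          "messageCount": "100"
        },
        "auth": [
          {
            "secretRef": "servicebus-connection",
            "triggerParameter": "connection"
          }
        ]
      }
    }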

Kubernetes made simpler

There’s a lot to like here. Azure Container Apps goes a long way to simplify configuring and managing Kubernetes applications. By treating a container as the default application unit and taking advantage of technologies like Dapr, you can build applications that run both in standard Kubernetes environments and in Azure Container Apps. Configuration is simple, with basic definitions for your application and how it scales, allowing you to quickly deliver scalable, cloud-native applications without needing a full devops team.

Azure began its life as a host for platform-as-a-service tools, and Azure Container Apps is the latest instantiation of that vision. Where the original Azure App Service limited you to a specific set of APIs and runtimes, Azure Container Apps has a much broader reach, providing a framework that makes going cloud native as simple as putting your code in a container.

Copyright © 2021 IDG Communications, Inc.


