
What is Azure Database for PostgreSQL?


Many organisations use PostgreSQL in their infrastructure to run mission-critical database workloads, but the open-source relational database can also run on the Microsoft Azure cloud service.

Azure Database for PostgreSQL is a managed implementation of the database service running on Microsoft’s cloud infrastructure. The aim is to let organisations quickly and easily develop applications using PostgreSQL, as well as native PostgreSQL tools, drivers and libraries, without worrying about having to manage and administrate the instances themselves.

There are two options for deployment: Single Server and Hyperscale (Citus) – although the latter service is currently in preview mode pending a full official release.

Azure Database for PostgreSQL – Single Server

The primary method of deployment for PostgreSQL databases on Azure is the single-server model. Those already familiar with PostgreSQL in on-premises environments will feel right at home, as the configuration is almost identical; users spin up a single PostgreSQL server which acts as the host for one or more databases. The main differences are that the service is fully managed, backed by a four-nines (99.99%) availability guarantee, and includes built-in backup and encryption features.

In the single-server option, developers can host a single database per server in order to maximise resource usage, or share server resources among multiple databases. Like most of Microsoft’s cloud tools, management and configuration can be done via the Azure portal or the Azure CLI.

Azure Database for PostgreSQL admins do not have full superuser permissions, however; the highest-privileged role available to users of the service is azure_pg_admin. Superuser attributes are instead assigned to azure_superuser, which belongs to the managed service itself, and service users have no access to this role or its associated privileges.

Azure Database for PostgreSQL – Hyperscale (Citus) (preview)

If you have a very large (100GB-plus) database that requires maximum performance, you may be interested in Azure’s newest PostgreSQL deployment model. Officially dubbed the ‘Hyperscale (Citus) (preview)’ hosting type, this method uses technology from Citus Data, a company that Microsoft acquired in January 2019.

Hyperscale (Citus) uses database sharding technology, which splits data into smaller component parts and distributes them across a large number of compute nodes grouped together into a cluster. This cluster offers far more storage capacity and CPU than a standard single-server PostgreSQL deployment can provide.

Big companies like Facebook and Google use database sharding within their data centres, but one of the advantages of Hyperscale (Citus) is that sharding is handled automatically, without the tenant application needing to be taught how to do it. The system parallelises SQL queries and other operations across available servers, with a central ‘coordinator node’ handling query routing, aggregation and planning, and ‘worker nodes’ storing data.

When the coordinator receives a request from the application, it routes the query to the relevant worker node(s), depending on where the data in question is stored. Caveat emptor, however; as Hyperscale (Citus) is in public preview, it does not offer an SLA at the time of writing.
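The coordinator's routing step can be sketched in plain Python. This is only an illustration with hypothetical node names: real Citus hashes the table's distribution column into 32-bit shard ranges rather than taking a simple modulo, but the key property is the same.

```python
import hashlib

# Hypothetical worker nodes in a Hyperscale (Citus)-style cluster.
WORKERS = ["worker-0", "worker-1", "worker-2"]

def route(distribution_key: str, workers=WORKERS) -> str:
    """Pick the worker that owns the shard for this key.

    A stable hash taken modulo the worker count stands in for
    Citus's real shard-range lookup on the coordinator node.
    """
    digest = hashlib.sha256(distribution_key.encode()).hexdigest()
    return workers[int(digest, 16) % len(workers)]

# Rows sharing a distribution key always land on the same worker,
# which is why joins on that key can run locally on one node.
assert route("tenant-42") == route("tenant-42")
```

The deterministic mapping is what lets the coordinator send a query only to the worker(s) that can actually hold the matching rows.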

Pricing

There are three pricing tiers for the single-server version of Azure Database for PostgreSQL – basic, general purpose and memory optimised – but a number of additional factors will influence your monthly bill. These include the amount of compute capacity you provision (measured in virtual cores, or ‘vCores’), the amount of storage used, and the backup capacity your deployment consumes. The number of databases per server has no direct impact on the price.
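Those three billed dimensions can be combined in a simple estimator. The per-unit rates below are deliberately parameters and the example figures are made up; real prices vary by tier, region and compute generation, so substitute the numbers from the Azure pricing page.

```python
def estimate_monthly_cost(vcores: int, storage_gb: int, backup_gb: int,
                          vcore_rate: float, storage_rate: float,
                          backup_rate: float) -> float:
    """Sum the three billed dimensions of a single-server deployment:
    provisioned vCores, storage used, and backup capacity consumed.
    Rates are caller-supplied; none of the figures here are real
    Azure prices."""
    return (vcores * vcore_rate
            + storage_gb * storage_rate
            + backup_gb * backup_rate)

# Illustrative only: 4 vCores, 100GB storage, 50GB backup, placeholder rates.
cost = estimate_monthly_cost(4, 100, 50,
                             vcore_rate=50.0,
                             storage_rate=0.12,
                             backup_rate=0.10)
```

Note what is absent from the calculation: the number of databases on the server, which, as stated above, does not affect the bill.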

You can tweak certain elements of your servers’ configuration after they’re created, such as increasing the number of vCores, amount of storage and backup retention period, as well as switching between the general purpose and memory optimised pricing tiers.

Understanding the tiers

Azure Database for PostgreSQL’s pricing tiers are delineated by a number of different factors, including the maximum number of vCores per server, the maximum memory allocation per vCore and the type of storage on offer. You can find details of the differences between the various tiers in the table below.

Pricing tier             Basic                   General purpose         Memory optimised
Compute generation       Gen 4, Gen 5            Gen 4, Gen 5            Gen 5
vCores                   1, 2                    2, 4, 8, 16, 32, 64     2, 4, 8, 16, 32
Memory per vCore         2GB                     5GB                     10GB
Storage capacity         5GB – 1TB               5GB – 4TB               5GB – 4TB
Storage type             Azure Standard Storage  Azure Premium Storage   Azure Premium Storage
Backup retention period  7 – 35 days             7 – 35 days             7 – 35 days

Basic

The entry-level tier is primarily designed for low-priority workloads that don’t require a great deal of performance. This can include test/dev environments or intermittently accessed applications.

General Purpose

As the name suggests, the general purpose tier is where most workloads will naturally fall, and encompasses most enterprise PostgreSQL use-cases. It offers a balance of performance and economy.

Memory Optimised

The highest tier is reserved for applications like financial transaction databases or analytics engines where low latency is paramount. For this reason, it makes heavy use of in-memory computing.

While the Basic tier does not provide an IOPS guarantee, IOPS in the other tiers scale with the provisioned storage size at a ratio of three IOPS per GB.
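That scaling rule is simple enough to compute directly. A minimal sketch, assuming the three-IOPS-per-GB figure above:

```python
from typing import Optional

def provisioned_iops(storage_gb: int, tier: str) -> Optional[int]:
    """Return the IOPS provisioned for a given storage size.

    The Basic tier carries no IOPS guarantee, so it yields None;
    the General Purpose and Memory Optimised tiers scale at
    3 IOPS per provisioned GB of storage.
    """
    if tier.lower() == "basic":
        return None
    return 3 * storage_gb

# A 100GB General Purpose server is provisioned with 300 IOPS.
assert provisioned_iops(100, "general purpose") == 300
```

In practice this means the cheapest way to buy more I/O throughput on those tiers is often simply to provision more storage.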

Benefits of Azure Database for PostgreSQL

The database cloud service has a number of advantages.

Built-in high availability: The service provides built-in high availability with no additional setup, configuration or extra cost. This means there is no need to set up further VMs and configure replication to guarantee high availability for a PostgreSQL database.

Security: All data, including backups, is encrypted on disk by default. The service also has SSL enabled by default, so all data in transit is encrypted as well.

Scalability: The service allows users to scale compute up or down in a single step, on the fly and without application downtime.

Automated backups: Users do not need to independently manage storage for backups. The service offers up to 35 days’ retention for automated backups.


