One great addition to the SQL Database PaaS service in Azure is the ability to use Elastic Database Pools to share capacity between Azure SQL Databases, in order to make the best use of the capacity at hand and to lower total costs. I will highlight the benefits, the scenarios and what to think about when setting it up, rather than writing a “Getting Started with Elastic Database Pools” guide. If you want help getting started, I suggest this link.
Elastic Database Pools make it possible to share database capacity between different databases. You can compare it to adding resources to a shared server rather than assigning resources to individual databases (as with a normal Azure SQL database). Of course, this PaaS service comes with the ability to scale up and down, as most PaaS services do, meaning that you are not locked to a specific database capacity. It is just as easy to change as always: through the portal, PowerShell and the APIs.
Understanding DTUs and eDTUs
This article assumes that you are familiar with the term DTU. DTUs are a bit abstract, but they determine the relative horsepower of a database in comparison with others. A DTU does not correspond to a certain number of operations per second; it is a relative measure for comparing the different instance scales (if you are new to DTUs you can find some useful info here). So 5 DTUs is the smallest size and should be used for very low usage, while a 20 DTU database is four times more capable than a 5 DTU database.

If you want to know how many DTUs your on-premises database would need if you were to move it to Azure SQL Database, you can get some help at dtucalculator.azurewebsites.net.
eDTUs, on the other hand, are the equivalent of DTUs for elastic database pools, i.e. the DTUs are not assigned to a single database but to the shared capacity of the pool.
A typical database scenario
Let’s consider the following scenario: you have 10 standalone (non-elastic) databases that during normal working hours use 5 DTUs (Database Transaction Units) each. When you set them up in Azure you assign a growth/peak margin to cover spikes, so you give each database a scale of 10 DTUs (because that is the step above 5 DTUs). You are now left with a setup scaled to 10 databases × 10 DTUs = 100 DTUs, while you most often only need 50 DTUs (if none of them are peaking). This is of course also reflected in your Azure consumption/bill. So with these margins you normally have an overcapacity of up to 50 DTUs.

Elastic Database Pools
Configuring an Elastic Database Pool solves this and other issues by assigning a shared eDTU capacity to a number of databases. In the scenario described above, your desired scale would be 10 databases × 5 DTUs (the capacity normally used) plus a small shared peak margin – perhaps 10 eDTUs in total (because it is unlikely that all your databases peak at the same time). So your ideal configuration would likely be something like the figure below, with 60 eDTUs.

This is what you do with elastic database pools – you set up a shared eDTU capacity for several databases. You can potentially save a lot of money by sharing capacity between the databases, since the capacity you need is the maximum combined DTU usage across all databases at any one time, not the sum of their individual peaks. In the figure above, these databases can share a common peak/extra capacity.
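If you prefer scripting to the portal, the pool itself can be created with the AzureRM PowerShell module (current at the time of writing). A minimal sketch, assuming a resource group and SQL server already exist; all names and values below are placeholders:

```powershell
# Sign in to Azure (AzureRM module)
Login-AzureRmAccount

# Create a Standard pool with 100 eDTUs shared by its databases,
# capping any single database at 50 eDTUs
New-AzureRmSqlElasticPool -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "my-pool" `
    -Edition "Standard" `
    -Dtu 100 `
    -DatabaseDtuMin 0 `
    -DatabaseDtuMax 50
```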
Combine the right databases
The example just described is of course very unlikely, because the likelihood of all databases behaving identically is very low, and it does not represent the best scenario for elastic database pools either. The pools shine when you combine databases that consume capacity at different hours of the day: the shipping application is very busy at 16:00 when goods are delivered to the shipping bay, invoicing may run once a week, and yet another database runs a heavy job during the night when all other systems are hardly used at all. You can then use the eDTU capacity differently at different times of the month/week/day and focus it where it is needed.
So rather than the usage pattern above, you will benefit more from combining databases with usage patterns like the scenario below.

If we look closer at the scenario above, these 4 databases together would use a bit less than 80 DTUs at maximum capacity.
So now we have the following options:
Option | Scale | Cost/Month |
---|---|---|
Elastic database pool | 1 Standard pool with 100 eDTU | €188.22/month |
4 single databases | 3 Standard S2 (50 DTU) + 1 Standard S3 (100 DTU) | €316.21/month |
As with the normal SQL Database DTUs, eDTU scales come in predefined sizes, so you might not be able to assign the exact number of eDTUs that you require. eDTUs are also somewhat more expensive than normal DTUs, but that extra cost is in most cases a fraction of the savings you make. In the example above, the standalone SQL Databases are almost twice as expensive as the elastic database pool. This is because the elastic pool offers 100 eDTUs in total, which still allows roughly 20 eDTUs of headroom for peaks and extreme conditions compared to what is actually needed. The standalone solution, on the other hand, has an extreme overcapacity (a total of 250 DTUs when only about 80 DTUs were ever needed at any point in time). However, we could not have sized those standalone databases any lower and still handled the peaks.
So how should I scale it?
First of all – Azure will sometimes suggest elastic pool configurations for you. I have, however, not received any such recommendations yet, but that could simply be down to how my databases are used.
Another way is to look at the peaks of your databases and find suitable candidates with different usage patterns. You can easily look at individual database usage in the new Azure Portal.

You can then set an initial scale when creating the pool. Once you have the pool up and running, you can easily see how much capacity the elastic pool uses as a whole, too.
You can also get some help when you add databases to a pool. As you can see, peak DTU and average DTU are displayed in the list when selecting databases (a scripted equivalent is sketched below).

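The same step can be scripted. A sketch with the AzureRM module, assuming the placeholder pool from earlier and an existing database called ShippingDb (also a placeholder):

```powershell
# Move an existing standalone database into the elastic pool
Set-AzureRmSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "ShippingDb" `
    -ElasticPoolName "my-pool"

# List the databases that now live in the pool
Get-AzureRmSqlElasticPoolDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "my-pool"
```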
You can also easily change the scale at any time to increase or decrease the capacity if you notice that you have set it too high or too low. Also read about monitoring further down.
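Rescaling can also be done from PowerShell. A minimal sketch, same placeholder names as above:

```powershell
# Scale the pool from 100 to 200 eDTUs
Set-AzureRmSqlElasticPool -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "my-pool" `
    -Dtu 200
```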
Important things to consider
Among the most important things to know about elastic database pools is that there are limits on how many (or how few) eDTUs an individual database can use. For example, in the Basic pools the maximum usage by a single database is set to 5 eDTUs (so it does not matter if you have 100 eDTUs available in the pool, an individual database can only utilize 5).
Elastic Database Pools come in the classic Basic, Standard and Premium tiers, which pack different capacities. Below is a summary (based on the figures as of August 24th, 2016); you can review the current capacities and other configuration parameters/constraints yourself at the Azure pricing page or https://azure.microsoft.com/documentation/articles/sql-database-elastic-pool/.
Parameter | Basic | Standard | Premium |
---|---|---|---|
Max eDTUs per pool | 100-1200 | 100-1200 | 125-1500
Max eDTUs per database | 5 | 10-100 | 125-1000
Min eDTUs per database | 0-5 | 0-100 | 0-1000
Max number of databases in the pool | 200-400 | 200-400 | 50
Max storage per database | 2 GB | 250 GB | 500 GB
As you can see, the Basic tier is aimed at those who have many very small databases. Standard allows you to have larger databases and to utilize more capacity in a single database when needed (consider the night-batch scenario during low-usage hours). The Premium tier is for heavily used databases. It is also important to consider other “normal” limitations, such as database storage size and restore options, when choosing a size.
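The per-database limits in the table correspond to the DatabaseDtuMin/DatabaseDtuMax settings on the pool. A small sketch (placeholder names, values only for illustration) that guarantees every database a share while capping any single one:

```powershell
# Within the existing pool: guarantee each database at least 10 eDTUs
# and stop any single database from consuming more than 50
Set-AzureRmSqlElasticPool -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "my-pool" `
    -DatabaseDtuMin 10 `
    -DatabaseDtuMax 50
```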
Can I use this for tenant-based databases?
Yes, you can – but consider that if all your customers peak at the same time, you still need a fairly high eDTU assignment to cover the peaks in all the different databases. So you will likely see greater savings if you combine databases with different peak times or have customers in different time zones*.
* Note: If you have customers in different Azure regions, you probably want to set up separate Elastic Database Pools in each region instead.
So when should I not use Elastic Database Pools?
It is hard to say with certainty that a particular scenario will always be a bad fit for Elastic Database Pools, but I will mention some cases that may not be ideal.
For example – a mission-critical system with constant consumption and priority over others may not be a good candidate for Elastic Database Pools. Here you probably want dedicated resources to reserve the needed capacity for your system.
Likewise, a tenant-based situation like the one described in the previous question, with peaks around the same times, MAY not be ideal for this functionality. I will present a new scenario below to illustrate this. This solution consumes just as many DTUs in total as the one in the section “Combine the right databases”.

From the diagrams we notice that these systems peak at approximately the same time. So what happens if I try to select a suitable configuration in this scenario? I now have a maximum consumption of around 120 DTUs between 14:00 and 16:00, which means that I would have to choose a Standard 200 eDTU pool instead.
Option | Scale | Cost/Month |
---|---|---|
Elastic database pool | 1 Standard pool with 200 eDTU | €376.45/month |
4 single databases | 4 Standard S2 (50 DTU) | €252.96/month |
You may also simply have too few databases. In some cases, like the one above, you may get a more cost-efficient solution with standalone databases.
So what if I want to remove a database from the pool, is that possible?
Yes, it is. Just as you can add databases, you can remove them from the pool with the Remove From Pool button (or from PowerShell, as sketched below).

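If you prefer PowerShell over the portal button, my understanding is that you move a database out of the pool by assigning it a standalone service objective again (placeholder names, and S2 is just an example tier):

```powershell
# Take ShippingDb out of the pool by giving it its own standalone tier again
Set-AzureRmSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "ShippingDb" `
    -Edition "Standard" `
    -RequestedServiceObjectiveName "S2"
```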
How do I maintain/monitor the pool after I have set it up?
You can view the utilization in the Azure portal, and you can set up alerts to warn you when the maximum capacity is near or has been reached. You can read more about this at https://azure.microsoft.com/sv-se/documentation/articles/sql-database-elastic-pool-manage-portal/
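If you want to pull the utilization numbers yourself rather than eyeballing the portal, the AzureRM.Insights cmdlets can help. A rough sketch with the placeholder names from earlier; metric names and some parameters may differ between module versions, so treat it as a starting point rather than a recipe:

```powershell
# Look up the pool to get its resource id
$pool = Get-AzureRmSqlElasticPool -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "my-pool"

# See which metrics the pool exposes (eDTU consumption, storage, etc.)
Get-AzureRmMetricDefinition -ResourceId $pool.ResourceId

# Pull the metric values for the last 24 hours at one-hour granularity
Get-AzureRmMetric -ResourceId $pool.ResourceId `
    -TimeGrain ([TimeSpan]::FromHours(1)) `
    -StartTime (Get-Date).AddDays(-1) `
    -EndTime (Get-Date)
```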
Hi, is the pool price fixed? What I mean is: if you have 4 databases with no activity at all for a month, would Azure still charge that price?
You pay for the elastic capacity per hour, meaning that if you have reserved a capacity and your elastic pool is running/active, you pay (see https://azure.microsoft.com/en-us/pricing/details/sql-database/); the capacity actually used does not matter. Elastic pools are still a step forward compared to standard databases, where each individual database forced you to pay for each active hour; with elastic pools you can decrease your costs if you plan cleverly and share capacity. But if you have absolutely no activity at all for a month and then do a lot of processing for a short time, you could back up the database(s) and decommission the pool between usages, and restore it for the active periods. You could automate this with PowerShell and, for example, Azure Automation.
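As an illustration of that decommission/restore idea, here is a rough sketch of the export and teardown half. The storage account, credentials and names are placeholders, the export runs asynchronously (Get-AzureRmSqlDatabaseImportExportStatus lets you poll it), and you would want proper error handling before trusting this with real data:

```powershell
# Export the database to a bacpac in blob storage before tearing things down
New-AzureRmSqlDatabaseExport -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "ShippingDb" `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey "<storage-account-key>" `
    -StorageUri "https://mystorage.blob.core.windows.net/backups/ShippingDb.bacpac" `
    -AdministratorLogin "sqladmin" `
    -AdministratorLoginPassword (ConvertTo-SecureString "<password>" -AsPlainText -Force)

# Once the export has completed, remove the database and the (now empty) pool
Remove-AzureRmSqlDatabase -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -DatabaseName "ShippingDb"

Remove-AzureRmSqlElasticPool -ResourceGroupName "my-rg" `
    -ServerName "my-sqlserver" `
    -ElasticPoolName "my-pool"

# When activity picks up again: import the bacpac (New-AzureRmSqlDatabaseImport)
# and re-create the pool (New-AzureRmSqlElasticPool)
```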