Hello there. Hopefully you’ve just completed the first lab of the Azure for beginners series on getting started with VMs, and you enjoyed it so much that you’ve come straight to the second one. Or perhaps you clicked it by accident. Either way, you’re here, so it’s time to learn about Azure SQL!
Azure SQL is the managed database-as-a-service offering on Azure for SQL relational databases. Essentially, it saves you from having to set up and maintain your own servers to run SQL on, and instead gives you the ability to spin up new databases quickly and with little overhead. In this lab, we’ll go through how to set up an Azure SQL database, populate the DB with sample data, query it and add to it with Data Explorer, scale up its performance, create an elastic pool, automatically optimise it and make it geo-redundant. Phew.
Let’s get started.
Exercise 1 – Create an Azure SQL database with the Azure CLI
Right, let’s create a database that we can work with, and we’ll do this with the Azure CLI (Command Line Interface). In my last lab I showed you how to use the Azure CLI from the Cloud Shell directly in the browser, so to shake things up I’ll show you how to use it locally on your machine instead.
Installing the Azure CLI locally: depending on whether you’re using a Mac, a Windows machine or something else, the installation will be slightly different. The Azure CLI 2.0 installation documentation here will have you covered.
Once it’s installed, we’re ready to begin.
Step 1 – Create a resource group for our deployment
So, as with any deployment of resources, as we learned in the last lab, we need a resource group as the logical container for everything we create. Open up your command line and enter the following to invoke the Azure CLI and log into your Azure account:

az login

Every Azure CLI command starts with az. You’ll now be prompted to visit the device login page and enter the code provided to authenticate. Once you’ve done that, you’ll get your subscription details:
Now we can type in the following command to create a resource group:
az group create --name SQLRG --location westeurope
You’ll receive the following confirmation once created:
Step 2 – Create a SQL Database logical server
Next, we need to create what’s called a logical server, which is used to contain a bunch of databases managed as a group, and is what we’ll nest our database inside. Use the following command (feel free to change the user and password parameters):
az sql server create --name SQLTestLogicalServer --resource-group SQLRG --location westeurope --admin-user serverAdmin --admin-password extremelySecurePass18
Once that’s cooked you’ll get the following printout:
Step 3 – Create a firewall rule to allow other Azure resources to communicate
Now that we’ve created our logical server, we need to create a firewall rule that will allow other Azure resources to communicate with our SQL database when it’s created. Type in the following command:
az sql server firewall-rule create --resource-group SQLRG --server sqltestlogicalserver -n AllowTestIp --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
If you want to open up the SQL database to all traffic, you can change the end IP to 255.255.255.255, but for the purposes of this lab we’ll just allow Azure traffic (the special 0.0.0.0 to 0.0.0.0 rule above means “allow Azure services”). Once that’s been created you’ll get this:
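If you’d also like to query the database from your own machine later (with SSMS or sqlcmd, for example), you’ll need a rule for your client’s public IP as well. A sketch, with the IP left as a placeholder for you to substitute:

```shell
az sql server firewall-rule create --resource-group SQLRG --server sqltestlogicalserver \
  -n AllowMyClientIp --start-ip-address <your-public-ip> --end-ip-address <your-public-ip>
```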
Step 4 – Let’s create the database
Okay, now it’s finally time to create the database in our logical server.
We can select from various performance levels for our databases, which affects things like max IO throughput, CPU and max storage size. There’s also the concept of a Database Transaction Unit (DTU), which is used by Microsoft to describe the amount of dedicated resources/power that will be assigned to a database, and this varies by tier and other options. For a full overview, check out the documentation.
We will use the S0 (standard) performance level for our database, and we’ll also be using the test dataset available (called ‘AdventureWorksLT’), so enter the following command to provision the database:
az sql db create --resource-group SQLRG --server sqltestlogicalserver --name SQLTestDB --sample-name AdventureWorksLT --service-objective S0
That very long list of details means it’s been provisioned successfully. Now time to query the test data.
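If you want to double-check from the CLI before heading to the portal, az sql db show returns the database’s current state (the status field should read Online):

```shell
az sql db show --resource-group SQLRG --server sqltestlogicalserver --name SQLTestDB
```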
Exercise 2 – Querying our database in the portal
Now we’re going to see how simple it is to query the data in our database using the tools available in the Azure portal, before we look at other ways of interacting with the data.
Step 1 – Authenticate to Data Explorer
Okay, let’s head to the portal to see if the Azure CLI is telling the truth and our resources have been created. Load up portal.azure.com (and log in if you aren’t already authenticated) and type SQLTestDB into the search bar at the top:
Hopefully the database we created will show up, in which case give it a click. After being presented with the DB dashboard, give Data Explorer a click on the left-hand side:
As shown above, it’ll require you to log in before we can query the data, so give Login a click at the top. Leave the method selected as SQL server authentication; the Login should be prepopulated with serverAdmin (unless you changed this in Exercise 1), so all you need to do is enter your password, which if you didn’t change it should be extremelySecurePass18:
Then press OK. If your credentials are correct you’ll be authenticated and your view should be like this:
Step 2 – Perform a simple query
Right, now we’re logged in, we can query the data. As suggested by the database name, we query using SQL commands. To get the top 10 products by category out of the test dataset, paste in the following to the code window on the right under ‘Query 1’:
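In case the query doesn’t display for you, here’s a sketch of the kind of SELECT we’re running, assuming the standard AdventureWorksLT schema (SalesLT.Product joined to SalesLT.ProductCategory):

```sql
SELECT TOP 10 pc.Name AS CategoryName, p.Name AS ProductName
FROM SalesLT.ProductCategory AS pc
JOIN SalesLT.Product AS p
    ON pc.ProductCategoryID = p.ProductCategoryID;
```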
Once you’ve done that, click Run. You’ll see what we requested pop up in the results box. Pretty neat, huh?
Step 3 – Insert a new row
Now let’s insert a new row into the database. Copy the below code and paste it into the query window:
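For reference, the INSERT looks something like this. The column names assume the standard AdventureWorksLT SalesLT.Product table (Name, ProductNumber, StandardCost, ListPrice and SellStartDate are all required), and the product number is just a made-up value for the lab:

```sql
INSERT INTO SalesLT.Product (Name, ProductNumber, StandardCost, ListPrice, SellStartDate)
VALUES ('Azure Socks', 'AZ-SOCK-01', 5.00, 10.00, GETDATE());
```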
When you click Run this time, you’ll get a success message like this:
Step 4 – Update an existing row
And finally, let’s update the entry we’ve just made. £10 is far too expensive for Azure socks, so let’s halve the price. It’s only fair. Enter the query below:
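The UPDATE is a one-liner against the row from the previous step (again assuming the SalesLT.Product table and the product name we inserted):

```sql
UPDATE SalesLT.Product
SET ListPrice = ListPrice / 2
WHERE Name = 'Azure Socks';
```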
This will update the row accordingly:
Nice and simple.
Exercise 3 – scaling the database to match demand
Now let’s pretend that after listing Azure Socks in our product line that demand has surged (as unlikely as that is), and our database is receiving more reads and writes than it can now handle because we set it up on the S0 tier. Not to worry, it’s incredibly easy to scale up on the fly, so let’s do that now.
Step 1 – Scale up the DTUs in the portal
First, navigate to Pricing tier on the left-hand side of the database dashboard:
Now drag the DTU slider up a notch to change our tier from S0 to S1, which will adjust our estimated monthly cost accordingly:
Click Apply and this will submit the database scale request. After a couple of minutes you should get a notification to say it’s successfully been scaled up.
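If you prefer the command line, the same scale operation can be done with az sql db update; the --service-objective flag takes the performance level name:

```shell
az sql db update --resource-group SQLRG --server sqltestlogicalserver --name SQLTestDB \
  --service-objective S1
```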
Step 2 – Monitor and change the size of the database using Azure CLI
Okay, let’s switch back to our command window to use the Azure CLI again (if you closed the window, open a new one and re-authenticate using az login). Enter the following command to get the current usage for the database:
az sql db list-usages --name SQLTestDB --resource-group SQLRG --server sqltestlogicalserver
As you can see, with our current settings, we are massively under-consuming the database size we’ve provisioned. What shall we do? Well, let’s just shrink the database.
Enter the command below (when you use the create command on a resource that already exists, it will simply update it):
az sql db create --name SQLTestDB --resource-group SQLRG --server sqltestlogicalserver --max-size 100MB
That’ll downsize our database, which we can confirm by checking the usage again:
az sql db list-usages --name SQLTestDB --resource-group SQLRG --server sqltestlogicalserver
And there we go, a much better fit.
But what if we don’t want to do this manually? Especially if demand for our workload is unpredictable and can peak and trough without warning? In fact, what if we could configure, say, multiple databases to be part of a, well I guess you could call it a pool, and these could be used to stretch when workloads peak? If only there was such a feature…
Step 3 – Creating an Elastic Pool
Ah yes, Elastic Pools, that’d do it. We can add our database to an Elastic Pool of other databases, which will effectively allow them to share each other’s resources when they need them.
Elastic Pools enable you to pool together databases and purchase elastic Database Transaction Units (eDTUs) for them to share, in order to accommodate unpredictable periods of usage by individual databases. The eDTU requirement for a pool is determined by the aggregate utilization of its databases. More info from the documentation here.
Let’s switch back to the portal and navigate to the logical server we created to contain our database. Type SQLTestLogicalServer into the search bar at the top and click on the result:
This will bring up the dashboard for the SQL Server. Now click on New Pool at the top:
The Elastic pool pane will pop up. In the Name field, call it TestPool, leave the pricing tier as is, and to add our existing database to the pool we’re creating, click Configure pool:
Then in the Configure pool blade, click on Add database at the top:
And in the Add database blade that’s popped up on the right-hand side, tick our database and click Select at the bottom:
Then finally, click Select at the bottom left of the Configure Pool pane (you’ll see your database has been added on the right):
Now we can create the pool. Just before continuing, give Pin to dashboard a tick for ease of access later, then click OK.
This will begin the process to provision the Elastic pool and move over our database. When it’s cooked, you’ll get the Elastic pool dashboard pop up, which looks like this:
As you can see, we have options along the top to create another database, which will then allow the two databases to share eDTUs between them and better respond to peaking demand without us having to worry as much. There’s plenty more to elastic pools to learn, but for the purposes of this lab we’ll leave it there for now. Make sure to check out the documentation I linked earlier which has everything you need to know.
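As a side note, everything we just clicked through can also be scripted with the Azure CLI. A sketch of the equivalent commands (the --dtu value is an assumption; pick whatever suits your pricing tier):

```shell
# Create the pool on our logical server
az sql elastic-pool create --resource-group SQLRG --server sqltestlogicalserver \
  --name TestPool --edition Standard --dtu 50

# Move our existing database into the pool
az sql db update --resource-group SQLRG --server sqltestlogicalserver \
  --name SQLTestDB --elastic-pool TestPool
```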
Exercise 4 – Optimising our database and configuring fail-over
So we’ve talked about how we’d scale to surging demand, but how do we make sure our database is performing as well as it can on a day-to-day basis, and also, that we have something to fail over to if the database goes down for any reason? Well, in this exercise we’ll go over how to set up automatic tuning and a secondary SQL logical server to fail over to in the event of the primary server failing.
Step 1 – Enabling automatic tuning for our SQL Server & databases within
Azure SQL offers some nifty automatic tuning methods for us to make the most of, to ensure that our database is running at light speed. It can automatically create indexes that will speed up regular queries, as well as drop ones that aren’t needed, all by itself. More info here.
So let’s switch it on. Head to the SQL logical server dashboard again by typing in SQLTestLogicalServer into the portal search box, then click on Automatic tuning on the left-hand menu under ‘Support + Troubleshooting’:
Click on Azure defaults under ‘Inherit from’, and then flick ‘Force plan’, ‘Create index’ and ‘Drop index’ from Off to On:
Finally, click Apply. Because databases inherit settings from their logical server by default, applying these at the server level covers our database too. So, our database is now optimised.
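For what it’s worth, the same settings can also be flipped per database with T-SQL, which is handy if you ever want one database to deviate from the server defaults. Run this against the database itself (not master):

```sql
ALTER DATABASE CURRENT
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON);
```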
Step 2 – Setting up geo-replication for fail-over
Now that we’ve optimised our database, we want to make sure it’s highly available. This means making sure that, in the rare event the instance we’ve set up goes down (say, if a datacentre were hit by a meteor), a secondary database can pick up the slack from elsewhere.
On the same page you’re currently on, click on Failover groups on the left-hand menu for the logical server, which will bring up the Failover group menu. Click Add at the top, then in the blade that pops up, set the Failover group name to sqltestfailover and click on Secondary server:
Then we’ll want to create a new server to act as our backup. Click Create a new server and fill in the following parameters:
- Server name: set this to sqltestlogicalserver-backup
- Server admin login: for consistency call this serverAdmin
- Password: use the one we used before (extremelySecurePass18) or a new one, as long as you remember it!
- Location: set to any region that isn’t West Europe to ensure redundancy. It’s usually best practice to pick one that’s fairly close for low latency, but I’ve put East US, because why not, it’s a lab.
Click Create and then OK on the Server blade, which will take you back to the Failover group blade. Last thing to set up is the database replication, so click on Select databases to add:
In the Databases blade, click on Select all at the top if it isn’t already selected, which will add our Test Pool and associated database to the replication list. You’ll now be told that the Elastic pool doesn’t yet exist on our secondary server, so give that warning message a click:
On the Elastic pool blade, just click Create and then Select on the Databases blade to confirm our databases to be replicated.
Once it all looks like the above, you’re ready to click on the big blue Create on the Failover group blade. This will submit our secondary failover server to be created and the replication to begin. Once it’s done you’ll see it appear in the Failover groups list:
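The portal flow above has a CLI equivalent too, if you’d rather script it. A sketch, assuming the secondary server has already been created:

```shell
az sql failover-group create --resource-group SQLRG --server sqltestlogicalserver \
  --name sqltestfailover --partner-server sqltestlogicalserver-backup \
  --add-db SQLTestDB
```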
So there you have it, our database is now geo-redundant and optimised, all with the furious clicking of a few buttons.
That’s all folks…
In this lab, we went through a basic deployment of an Azure SQL database, and we configured it to be elastic, geo-redundant and automatically-optimised. Nicely done. Here’s what we learned:
- Provisioning Azure SQL databases with Azure CLI
- Using the Data Explorer to perform queries, add new rows and update data
- Using basic SQL commands
- Scaling up a database in the portal
- Using Azure CLI to monitor storage usage and update max-size
- Provisioning Elastic Pools and adding an existing database
- Enabling automatic optimisation on our DB
- Setting up geo-redundant failover for our DB logical server and its contents
Hope you enjoyed this lab and found it useful. As ever, please leave me some feedback in the comments, what you liked, what you didn’t and what was missing, and of course, requests for future lab topics. In my App Development on Azure series, I’ll be doing a lab on how to link an application with Azure SQL and work with the data, so look out for that in the near future!
Thanks again and take care.