Lance England

Quicksilver Project - Database

This post is part of a series on getting hands-on practice with a DevOps project.

Data will be stored in a relational database, specifically an Azure SQL Database. The data model should be fairly straightforward. My initial model will be intentionally bare-bones so I can implement a CI/CD pipeline and then add more to it.


Customer - id, first, last, email. The person requesting a delivery.

Package - id, name, size, weight. The item to be delivered.

Delivery - id, address, geocode. The destination for the package.

Courier - id, first, last, email. The person delivering the package.
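
As a sketch, the bare-bones model above might look like this in T-SQL. The column names, types, and constraints here are my own assumptions for illustration, not the final schema:

```sql
-- Minimal starting schema (illustrative; types and constraints are assumptions)
CREATE TABLE dbo.Customer (
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    FirstName  NVARCHAR(50)  NOT NULL,
    LastName   NVARCHAR(50)  NOT NULL,
    Email      NVARCHAR(256) NOT NULL
);

CREATE TABLE dbo.Package (
    PackageId INT IDENTITY(1,1) PRIMARY KEY,
    Name      NVARCHAR(100) NOT NULL,
    Size      NVARCHAR(20)  NULL,
    WeightKg  DECIMAL(8,2)  NULL
);

CREATE TABLE dbo.Delivery (
    DeliveryId INT IDENTITY(1,1) PRIMARY KEY,
    Address    NVARCHAR(200) NOT NULL,
    Geocode    NVARCHAR(50)  NULL
);

CREATE TABLE dbo.Courier (
    CourierId INT IDENTITY(1,1) PRIMARY KEY,
    FirstName NVARCHAR(50)  NOT NULL,
    LastName  NVARCHAR(50)  NOT NULL,
    Email     NVARCHAR(256) NOT NULL
);
```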

Future work will include events (e.g. picked up, out-for-delivery, delivered).


Before I get started on design and coding, I have some questions that I will hopefully answer as I’m working through it.

Getting Started


First, I opened VSCode and made sure the SQL Database Projects extension was installed. Then I opened the Command Palette (Ctrl+Shift+P) and chose “Database Projects: New”, which gave an option of Azure SQL Database or SQL Server. I’m not exactly sure what the difference means in terms of the extension functionality; probably the deployment method(?).

Right-clicking in the extension toolbar gives options for New Table, New Stored Procedure, etc., so this looks familiar enough to quickly stub out a handful of tables so I can move on to the deployment step.


I’m going to try this multiple ways, each time getting closer to the goal of CI/CD pipeline deployment.

From VSCode

First, I’ll manually create an Azure SQL Database through the portal and deploy the code from VSCode, to verify the DACPAC deploys as expected.

Azure SQL Database now has a free tier perfect for these types of learning exercises. I picked my Entra ID/Azure AD account as the administrator. After the database was created, I set the server firewall rule to allow my client IP address.

In VSCode, I chose ‘Publish’ and it launched a basic wizard in the Command Palette area, asking for the fully-qualified server name, database name, and authentication method. It also allowed me to save a publish profile. Everything worked, so on to the next deployment method.

From Azure DevOps Pipeline

From the Azure portal, I dropped and re-created an empty database. Note: in order to apply the free offer again, you have to delete the logical server too.

Next, I created a new project in Azure DevOps. Back in VSCode, I used the Git extension to initialize a repo. Under Repos in Azure DevOps, I got the URL of the repo, added it as a remote for my local git repo (the database project), and pushed it up to Azure Repos.

Next step: creating the pipeline and setting it up to publish to Azure SQL Database. First challenge: getting the build path correct. This took multiple attempts, including using echo $() to see the path and finding this article on Building Git Repos that mentions the default checkout location of /s. I was finally able to build it with the DotNetCoreCLI@2 task.

Moving on to deployment, at time of writing (October 2023) the SqlAzureDacpacDeployment@1 task only works on Windows agents, so be sure your YAML pipeline targets vmImage: windows-latest.

Azure DevOps needs a way to deploy to your Azure subscription, so first you need to set up an Azure Resource Manager service connection. The first option is selecting the authentication type that Azure DevOps (AdO) will use when connecting to the Azure subscription. There are six (6) options, and I only have knowledge of two (2) of them, so that is another area to learn more. I chose “Service principal (Automatic)”, in which AdO creates a service principal in the related Azure AD (aka Entra ID) instance. It allows you to pick the scope between Management Group, Subscription, and Resource Group, and assigns the Contributor RBAC role.

My first decision point before creating the pipeline was how much Infrastructure-as-Code to automate. Should I try to automate the creation of the Azure SQL Database/logical server instance in the pipeline? When learning something new, I tend to go for the lowest “friction” route at first; I’d prefer to get a minimum-viable concept working, and can improve/productionize it later. So, I decided to create the Azure SQL Database manually.

Next step: SQL Database permissions. The subscription-level RBAC Contributor role is for management-plane permissions. In order to allow the AdO service principal to create objects inside the database, it must have some level of SQL Server permissions, which I assigned using T-SQL statements after connecting with SQL Server Management Studio.

Note: In the Azure portal, browse to the SQL Database and choose “Set Firewall Rules” and choose the option to allow your client IP address.

I created a database user mapped to a server login, then assigned it the db_owner role. In the future, this could be reduced to some combination of ddl_admin, db_datawriter, etc. However, if you plan on defining custom database roles, you’ll need db_owner or db_securityadmin.
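
The statements I ran looked roughly like the following. This is a sketch: [quicksilver-pipeline] is a placeholder for the actual service principal name, and Azure AD/Entra-based logins assume your logical server has an Entra administrator set:

```sql
-- Run in the master database: create a server login for the AdO service
-- principal ([quicksilver-pipeline] is a placeholder name).
CREATE LOGIN [quicksilver-pipeline] FROM EXTERNAL PROVIDER;

-- Run in the target database (QuicksilverDb): map a database user to the
-- login, then grant it db_owner.
CREATE USER [quicksilver-pipeline] FOR LOGIN [quicksilver-pipeline];
ALTER ROLE db_owner ADD MEMBER [quicksilver-pipeline];
```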

To build a pipeline to deploy the SQL Database Project, I used two tasks: DotNetCoreCLI@2 to build the project and SqlAzureDacpacDeployment@1 to deploy the DACPAC.

It took multiple attempts to get the file paths figured out, but these are exactly the type of things you just have to fumble through and learn. I used the system variable Agent.BuildDirectory and, by running a DIR command in a script task, discovered my repo code was under /s. After the fact, I discovered a different system variable, Build.Repository.LocalPath, that gives the path to the repo.

After some trial and error, I was able to build and deploy, hurrah! My YAML pipeline (at this point):

trigger:
- main

pool:
  vmImage: windows-latest

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    projects: '**/*.sqlproj'

- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'Visual Studio Enterprise Subscription – MPN(guid redacted)'
    AuthenticationType: 'servicePrincipal'
    ServerName: ''
    DatabaseName: 'QuicksilverDb'
    deployType: 'DacpacTask'
    DeploymentAction: 'Publish'
    DacpacFile: '$(Build.Repository.LocalPath)\s\Quicksilver\bin\Debug\Quicksilver.dacpac'
    IpDetectionMethod: 'AutoDetect'

Future Steps

18 Oct 2023