Background

Docker was first up on my list of tech to play with in 2022 and it has been quite a revelation. I wasn't exactly sure what Docker was all about, but I have discovered that it is a fantastic technology, one that will only grow in usage and popularity and will change the way in which we develop, deploy and run our applications.

I haven’t yet used it in anger in production, but it has been a great fit for me in the dev environment. The application I am currently working on has multiple databases, a background service and a web application. I’ve been able to very quickly Dockerise these and have everything run locally.

Most of my time is spent developing the web application. Docker has allowed me to launch the databases and background service and then develop the web application locally, connecting to the databases within containers. The databases used to be hosted in Azure and shared with others, which sometimes led to instability, not to mention the faffing about with firewall rules whenever my connection dropped or recycled while tethered to a 4G sim.

This post…

All of the code is up on GitHub: cortexGG

This article will walk through:

  • Setting up a SqlServer container with a database
    • Scripting out the creation of a database
    • Seeding it with data
  • Setting up a dotnet container to launch a background service (command line application)
    • .net6 command line app
    • connecting to one of the databases above
  • Launching them all using Docker Compose
  • Creating a locally running dotnet Web Application that connects to the database running inside Docker
  • Pointing a SQL IDE at the database so that we can interrogate the data and update the database

The local development environment will look something like this…

Both the database and background service will be running locally within Docker containers. We can then continue to build and run the web application locally, outside of Docker, connecting through to the database running inside Docker. The background service can also sit there ticking along without us having to run it separately.

We will look at how to set up persistent storage for the database so that we are able to retain the database state, even when we shut Docker down. We will also show how to stand up the database with the tables and stored procedures needed by the application and seed it with test data.

The background service will be configured to connect to our database, demonstrating how to reach the database from within another Docker container.

Docker basics

Before diving in, first a bit about Docker, in case you are coming to this afresh. Docker allows you to create - well, define really - the infrastructure and application as a single lump (an image in Docker speak) that can then be instantiated and run within a container. This approach allows you to run the application locally and know that you will get exactly the same behaviour when you deploy it into another environment.

Docker can be used in isolation for development too and can bring with it many benefits:

  • Every developer on the team gets their own sandbox - no shared resources such as the “dev” database
  • Encourages you to script out the database which allows you to version changes alongside the application code
  • Removes external dependencies on database servers (network or cloud instances), which can reduce costs

Summarising the key Docker definitions relevant to this blog…

  • Image: Describes exactly what is to be instantiated when loaded into a container - which OS, application definitions and configurations
  • Dockerfile: A text file that describes how an image is to be built
  • Container: A running instance of a Docker image
  • Docker Compose: A file that allows you to define a collection of containers that inter-operate with one another
  • Volume: Used to persist data between container instantiations. We will be able to add data to our database, shut down the machine, restart Docker, stand the containers back up, and the data will have been retained.

Setting up the dev env

1) Defining our SqlServer image

Docker images almost always extend other images (for the curious, the root of the chain is Docker's reserved, empty scratch image). The Dockerfile shown below builds upon a base SqlServer 2019 image. This is the official Microsoft SqlServer 2019 image and, at the time of writing, the latest one that runs on Ubuntu.

FROM mcr.microsoft.com/mssql/server:2019-CU15-ubuntu-20.04

# Mandatory environment variables needed by the SqlServer image above
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=!SomethingSecret123

# Copy all of the Database creation scripts into the image and then run the script that makes it happen
COPY ./01_CreateTestSourceDatabases/ /
ENTRYPOINT [ "/bin/bash", "InitialiseIfNotCreated.sh" ]

CMD [ "/opt/mssql/bin/sqlservr" ] 

There are two mandatory environment variables that have to be specified:

  1. ACCEPT_EULA: This needs to be set to Y and acknowledges that you have, of course, read the end-user licence agreement in full and are happy with it
  2. SA_PASSWORD: This will be the password that the System Administrator account (SA) is given. We’ll use this in our connection strings.

The key part of this Dockerfile is the database creation process: the COPY and ENTRYPOINT instructions. The COPY pulls the folder of database creation scripts into the image, and the ENTRYPOINT specifies the script that will get called when the container is started. That script initialises the database if it doesn’t already exist, then hands over to the CMD to launch SqlServer 2019.
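If you want to try this image on its own before we wire up Docker Compose, something like the following should work from the directory containing the Dockerfile (the bikestore:1.0.0 tag is just an example, matching the tag used in the compose file later on):

docker build -t bikestore:1.0.0 .
docker run -d --name MyDevSqlServer -p 51433:1433 bikestore:1.0.0

# Watch the logs for the 'DB Created.' message from the creation scripts
docker logs -f MyDevSqlServer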

The initialisation process is split out into three scripts that will…

  1. Create the database
  2. Build out all of the tables, views and stored procedures
  3. Seed the database with test data
Note
====
The bike store database is the sample taken from sqlservertutorial.net
    https://www.sqlservertutorial.net/sql-server-sample-database/

Initialisation script

The initialisation script is shown in full below.

#!/bin/bash
set -e

if [ "$1" = '/opt/mssql/bin/sqlservr' ]; then

  # If this is the container's first run, initialise the application database
  if [ ! -f /tmp/app-initialised ]; then

    # Initialise the application database asynchronously in a background process. 
    # This allows 
    #     a) the SQL Server process to be the main process in the container, which allows graceful shutdown and other goodies, and 
    #     b) us to only start the SQL Server process once, as opposed to starting, stopping, then starting it again.
    function initialise_app_database() {

      # Wait a bit for SQL Server to start. SQL Server's process doesn't provide a clever way to check if it's up or not, 
      # and it needs to be up before we can import the application database
      sleep 15s

      #run the setup script to create the DB and the schema in the DB
      /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P !SomethingSecret123 -d master -i 01_create-src-db.sql
      /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P !SomethingSecret123 -d bike-store-db -i ./ddl/02_create-bike-store-db.sql
      /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P !SomethingSecret123 -d bike-store-db -i ./data/03_data-bike-store-db.sql

      echo "Tell that aardvark it's a wrap."
      # Note that the container has been initialised so future starts won't wipe changes to the data
      touch /tmp/app-initialised
    }

    initialise_app_database &

  fi

fi

exec "$@"

Script explained

This is a bash script that will run inside the Ubuntu container. Some of the key bits are explained below:

  • The check for the existence of the /tmp/app-initialised file means the buildout process is skipped if the container has run before
  • The touch /tmp/app-initialised command at the end of the function is what puts this file there once the database has been created
  • The sleep 15s gives the container enough time to have fired up SqlServer before we start using it to create the database (a more robust alternative is sketched below)
  • The first sqlcmd call creates the database
  • The second builds out the database (runs the ddl to create the tables and sprocs etc)
  • The third populates the tables with seed data
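The fixed 15 second sleep is the simple option and works well enough locally. A sketch of a more robust alternative, reusing the sqlcmd tool already present in the image, would be to poll until SQL Server actually accepts connections:

# Poll SQL Server until it accepts connections, rather than sleeping for a fixed period
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P '!SomethingSecret123' -Q "SELECT 1" > /dev/null 2>&1; do
  echo "SQL Server not up yet - waiting..."
  sleep 2
done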

The ‘01_create-src-db.sql’ script

There is not too much to this one. The initialisation script runs it against the SqlServer master DB; it creates a new database called bike-store-db and then outputs a log message to say that it has been created (useful when we are checking the container status as it loads).

create database [bike-store-db]
go

PRINT 'DB Created.'
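The marker file check in the entrypoint already stops this from running twice, but if you wanted the script itself to be safely re-runnable, a defensive sketch would be:

-- Only create the database if it doesn't already exist
IF DB_ID('bike-store-db') IS NULL
	CREATE DATABASE [bike-store-db];
go

PRINT 'DB present.'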

The ‘02_create-bike-store-db.sql’ script

This script runs against the bike-store-db DB we have just created and creates all of the schemas, tables, views and stored procs needed. A portion of the creation script is shown below.

-- create schemas
CREATE SCHEMA production;
go

CREATE SCHEMA sales;
go

-- create tables
CREATE TABLE production.categories (
	category_id INT IDENTITY (1, 1) PRIMARY KEY,
	category_name VARCHAR (255) NOT NULL
);

... more stuff here ...

CREATE TABLE sales.staffs (
	staff_id INT IDENTITY (1, 1) PRIMARY KEY,
	first_name VARCHAR (50) NOT NULL,
	last_name VARCHAR (50) NOT NULL,
	email VARCHAR (255) NOT NULL UNIQUE,
	phone VARCHAR (25),
	active tinyint NOT NULL,
	store_id INT NOT NULL,
	manager_id INT,
	FOREIGN KEY (store_id) REFERENCES sales.stores (store_id) ON DELETE CASCADE ON UPDATE CASCADE,
	FOREIGN KEY (manager_id) REFERENCES sales.staffs (staff_id) ON DELETE NO ACTION ON UPDATE NO ACTION
);

The ‘03_data-bike-store-db.sql’ script

This script is also run against the bike-store-db DB and populates the tables that have just been created above with data. The full script contains 10k lines; a portion is shown below. Note that many of the tables have identity columns, so IDENTITY_INSERT has to be switched ON whilst we seed them - this temporarily suspends the automatic identity generation and lets us insert explicit key values.

SET IDENTITY_INSERT production.brands ON;

INSERT INTO production.brands(brand_id,brand_name) VALUES(1,'Electra')
INSERT INTO production.brands(brand_id,brand_name) VALUES(2,'Haro')
INSERT INTO production.brands(brand_id,brand_name) VALUES(3,'Heller')
INSERT INTO production.brands(brand_id,brand_name) VALUES(4,'Pure Cycles')
INSERT INTO production.brands(brand_id,brand_name) VALUES(5,'Ritchey')
INSERT INTO production.brands(brand_id,brand_name) VALUES(6,'Strider')
INSERT INTO production.brands(brand_id,brand_name) VALUES(7,'Sun Bicycles')
INSERT INTO production.brands(brand_id,brand_name) VALUES(8,'Surly')
INSERT INTO production.brands(brand_id,brand_name) VALUES(9,'Trek')

SET IDENTITY_INSERT production.brands OFF;

SET IDENTITY_INSERT production.categories ON;
INSERT INTO production.categories(category_id,category_name) VALUES(1,'Children Bicycles')
INSERT INTO production.categories(category_id,category_name) VALUES(2,'Comfort Bicycles')
... 1000's of lines of test data
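Once the container is up, a quick way to confirm that the seed data landed is to count the rows in every table. This query isn’t part of the repo scripts, just a handy check:

-- Row counts per table across all schemas
SELECT s.name + '.' + t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY s.name, t.name
ORDER BY table_name;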

2) Creating a Docker image to launch a background service

The Docker image for the dotnet command line application is pretty standard. Rider, Visual Studio and VS Code will all auto-generate very similar files if you ask them to add Docker support. The Dockerfile uses a multi-stage build: one stage builds and publishes the executable, and the final image only contains the dotnet runtime required to run the application (as opposed to also including the bulkier SDK required to build it).

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Scheduler/Scheduler.csproj", "Scheduler/"]
RUN dotnet restore "Scheduler/Scheduler.csproj"
COPY . .
WORKDIR "/src/Scheduler"
RUN dotnet build "Scheduler.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "Scheduler.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Scheduler.dll"]

The important thing to remember to do before you build the image is to update the connection string so that it references our Docker-hosted SqlServer.

{
  "ConnectionStrings": {
    "BikeStore": "Server=MyDevSqlServer;Initial Catalog=bike-store-db;Persist Security Info=False;User ID=SA;Password=!SomethingSecret123;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;"
  }
}

Property notes…

  • Server: This won’t yet make sense, but it will be the name that we give to the database container (server) in the compose file
  • Initial Catalog: This is the name we gave to our database in the creation scripts above, i.e. bike-store-db
  • User ID: We haven’t scripted out any special users and will use the default SA account
  • Password: This is the password we specified in the Dockerfile that creates the SqlServer image
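To illustrate how the Scheduler might use this, here is a minimal sketch of connecting and querying from .net6 (hypothetical code, not the repo’s actual Scheduler; assumes the Microsoft.Data.SqlClient and Microsoft.Extensions.Configuration.Json packages):

// Minimal .net6 top-level program: read the connection string and query the DB
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json")
    .Build();

var connectionString = config.GetConnectionString("BikeStore");

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

// sales.orders is one of the tables created by the bike store scripts
await using var command = new SqlCommand("SELECT COUNT(*) FROM sales.orders", connection);
var orderCount = (int)(await command.ExecuteScalarAsync())!;
Console.WriteLine($"Orders in bike-store-db: {orderCount}");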

3) Docker Compose - pulling it all together

The Docker compose file lets us define multiple containers that may interact with one another and be instantiated together. The following Docker compose file defines two services:

  1. Scheduler - our background service, built using the Scheduler/Dockerfile we introduced above in section 2. This will run within a container called Scheduler but is not accessible outside of Docker.
  2. MyDevSqlServer - a SqlServer 2019 instance, built using the Dockerfile we walked through in section 1. This runs within a container called MyDevSqlServer, which is the server name that we specified in the Scheduler’s connection string in appsettings.json. Also note that we have exposed port 51433 to the local machine and mapped it to port 1433, the default port that SqlServer accepts connections on.
version: "3.9"

services:

  Scheduler:
    image: scheduler:1.0.0
    build:
      context: ../Src/DevelopingDotnetWithDocker/
      dockerfile: ./Scheduler/Dockerfile
    depends_on:
      - MyDevSqlServer
    entrypoint: ["/app/wait-for-it.sh", "MyDevSqlServer:1433", "-t", "120", "--", "dotnet", "Scheduler.dll"]
  
  MyDevSqlServer:
    image: bikestore:1.0.0
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - "51433:1433"

    volumes:
      - mssqldata:/var/opt/mssql

volumes:
  mssqldata:

Some of the magic explained

Data Persistence

The volumes entry on the MyDevSqlServer service maps a named data volume to /var/opt/mssql, which is the directory where SqlServer stores all of its data files. This data volume, labelled mssqldata, is stored by Docker outside of the container and outlives the containers.

Are you ready yet?

The SqlServer container can take a while to initialise, especially the first time, as it has to create the database, run the ddl creation script and seed the data. Docker Compose will try to start the Scheduler as soon as it has launched the MyDevSqlServer container (depends_on only waits for the container to start, not for SqlServer to be ready). The entrypoint override on the Scheduler service runs /app/wait-for-it.sh, a script that attempts to connect to MyDevSqlServer on port 1433 for up to 2 minutes. Once successful it runs Scheduler.dll. The wait-for-it.sh script lives in the root of the Scheduler project and is copied to the output directory when built.
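An alternative to wait-for-it.sh, sketched below, is to give the database container a healthcheck and use the long form of depends_on so that Docker Compose itself does the waiting. This needs a Compose version that supports depends_on conditions (the docker compose v2 CLI does):

# Sketch: merge into the compose file above
services:
  MyDevSqlServer:
    healthcheck:
      test: ["CMD-SHELL", "/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P '!SomethingSecret123' -Q 'SELECT 1' || exit 1"]
      interval: 10s
      retries: 12

  Scheduler:
    depends_on:
      MyDevSqlServer:
        condition: service_healthy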

Starting the containers

The commands below run through the full lifecycle that you will want to use.

Starting in the ./DevelopingDotnetWithDocker/devenv directory…

docker-compose up -d

This starts the two services defined in the compose file, running them in the background (the -d flag is for “detached” mode). If the containers are being run for the first time then Docker will automatically build the images. This means it will have to pull down the SqlServer image and the dotnet SDK and runtime images from the container registry, which may take a while; once it has these locally it will not need to do so again. Note that all of the base images are pinned to specific versions (as opposed to :latest), which means we won’t be pulling down a fresh image every time an update is released. This is the recommended approach and puts you in control of when you want to upgrade.
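You can see the images that have been pulled and built locally (and their tags) with:

docker image ls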

docker-compose ps

This lists the containers that are currently running, and their status.

docker-compose stop

This stops the containers.

docker-compose down

This is a useful command and will remove the containers as well. The named data volume will be retained though. You can remove that too by including --volumes, which is useful when you want to rebuild the database from scratch.

docker-compose down --volumes

If you want to rebuild the database from scratch then you also need to remember to remove the image.
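Putting that together, a full from-the-ground-up rebuild of the database looks something like this (the bikestore:1.0.0 image name comes from the compose file above):

docker-compose down --volumes
docker image rm bikestore:1.0.0
docker-compose up -d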

Connecting to the databases

From inside Docker

If you want to connect to the database from a container running inside Docker, then it can reference the server name that you have given your service - in the example above, MyDevSqlServer.

So the connection string for the bike-store-db running within the MyDevSqlServer SqlServer instance is:

Server=MyDevSqlServer;Initial Catalog=bike-store-db;Persist Security Info=False;User ID=SA;Password=!SomethingSecret123;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;

From the host machine

If you want to connect to the database from the host OS (outside Docker), then you can reference localhost as the Server, but you will need to specify the port 51433 as it is mapped to 1433 within the container.

Connection String

Server=localhost,51433;Initial Catalog=bike-store-db;Persist Security Info=False;User ID=SA;Password=!SomethingSecret123;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;
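If you have sqlcmd installed on the host, this is a quick way to prove the port mapping works before configuring an IDE:

sqlcmd -S localhost,51433 -U SA -P '!SomethingSecret123' -d bike-store-db -Q "SELECT TOP 5 brand_name FROM production.brands"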

DataGrip

The same host connection details work for pointing a SQL IDE such as DataGrip at the database: localhost on port 51433 with the SA credentials above. From there you can interrogate the data and update the database.

Blazor app

Once you have your SqlServer container running, you should be able to launch the Blazor app in the code sample; hitting the Bike Orders page will show you all of the orders from the last 7 days.

If the scheduler is also running then it will be adding a new order every 10 seconds. Refreshing the page will start showing new orders at the top of the list.

Rider makes it right nice and easy

Rider has some nice integration with Docker.

Docker Compose

As with unit tests, Rider adds Start buttons to the gutter next to each service defined within the Docker compose file. These will start, or re-start, the container/service.

You can hit the » double-play button at the top of the compose file to start all of the services (the equivalent of docker-compose up).

Services

Rider also has a useful Services tool window that will show you:

  • The containers that are running
  • Which images have been created
  • The data volumes that have been created

This is especially useful when you want to tear down your database and rebuild it.

Stuff to remember

  • The Dockerfile defines how the image is to be built
  • Once you have built an image it is cached by Docker
  • Every time you start a container, it will check to see if the image exists already and use that
  • If you change any of your DB build scripts or scheduler code, this will not invalidate your existing Docker image
  • You will need to delete the image if you want Docker to rebuild it afresh (or give it a new tag in the docker-compose file)
  • Also remember that we are using a named volume to store the SqlServer data files.
    • If you leave this in place, then even if you delete the SqlServer image and rebuild/restart it, Docker will:
      • Create a new image
      • Start the container
      • The initialisation script won’t find the /tmp/app-initialised file (it lives in the container, not in the volume) so it will run the creation scripts again (a quick check for this file is sketched after this list)
      • These will all fail as the ddl objects already exist and the insert scripts will violate primary keys
    • If you do want to rebuild the image then also delete the named volume (as well as the image). This will then create everything again from the ground up.
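A quick way to check whether a running container has already been through the buildout is to look for the marker file from the initialisation script:

# Uses the service name from the compose file; exits with an error if the file is absent
docker-compose exec MyDevSqlServer ls -l /tmp/app-initialised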

Wrapping up

VS Code, Visual Studio and Rider all have support for Docker when creating projects, and it has been easy to retrofit Docker into existing applications by simply dropping Dockerfile and docker-compose.yml files into repositories.

Having the database running locally in your development environment has brought about many benefits for me.

  • Complete control of my env - it is all running locally. No shared databases, no Azure firewall rules to update, no network latency.
  • Encourages better behaviours when it comes to scripting out database changes and makes it easier to manage and test changes
    • I can retain my v1.0 database, build my v1.1 database, test the build from scratch and test the upgrade process
  • I have a handful of machines that I develop changes on (work, home, laptop etc). Being able to pull down my repository branch and rebuild the db containers in a couple of minutes improves productivity

It has taken a while to get my head around it all, especially at the start, when restarting containers was not picking up changes I had made locally because they were not in the cached image being used.

I haven’t yet used Docker in production, but do have it on the todo list!

The sample application can be found on the CortexGG GitHub.

That is pretty much it. A slightly longer post than planned!

Any comments or feedback? Please fire them across to me on the twitters (@MarcBeavan) or on Mastodon - see link in footer.