Building Docker Image for ASP.Net Application

This article will describe building and deploying an ASP.Net sample web application for Docker. We’ll be using Docker Desktop for Windows and Visual Studio 2022 Developer Edition.

To learn how to install Docker Desktop on Windows 10 please refer to my article

An ASP.Net web application can only be deployed on a Windows container. However, a .Net Core application can be deployed on either a Linux or a Windows container. To learn more about deploying a .Net Core application refer to my article

Let’s follow the steps to create the project:

  • Open Visual Studio 2022 and create a new project
  • Let’s name it AspWebApp4Docker
  • Check Docker Support on the next screen. Since this is an ASP.Net application, only Windows containers are supported.
  • Click Create, and the Windows Server Core Docker image will start pulling in a separate command prompt.
  • The output window of Visual Studio will show something like this:
  • Once done, Visual Studio will try to run the project from the Docker image. But it didn't work for me!
  • Let’s remove the image from Docker.
  • Find the running container in PowerShell:
    • docker ps
  • The image column will show something like aspwebapp4docker:dev. Note the container ID (or name), then stop and remove the container:
    • docker stop <container-id>
    • docker rm <container-id>
  • Now go to the Dockerfile in the project directory. It'll look like this:
  • Let's comment out this code using # in front of each line. Yes, that's how we comment a Dockerfile.
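For example, commenting out the generated instructions looks like this (the lines shown here are just illustrative; your Dockerfile's contents will differ):

```dockerfile
# Docker treats any line starting with # as a comment:
# FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# WORKDIR /inetpub/wwwroot
# COPY . .
```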
  • Now we'll publish the application before putting it into Docker.
  • From Visual Studio Solution Explorer Select Publish.
  • We'll publish it locally. Select Folder as the publish target on the next screen and provide the folder path on the subsequent screen.
  • Finally do Publish
  • The publish folder will be available under the bin folder in your project root directory.
  • Now we'll tell Docker to use this location to build the image.
  • Open the Dockerfile and put the following (the base image is the Windows Server Core based ASP.Net image):


FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
COPY ./bin/app.publish/ /inetpub/wwwroot

  • Here we are instructing Docker to use the Windows Server Core based ASP.Net image and to copy the published application to /inetpub/wwwroot.
  • Now we'll build the project from Windows PowerShell. Please note that since we already pulled the Windows Server image from the Microsoft Container Registry while creating the project, this time Docker will use the cache to proceed.
  • In PowerShell, change location to the project root directory. Now build the project:
    • docker build -t aspwebapp4docker .
  • The image is named aspwebapp4docker. We can also tag it using the name:tag format (e.g. aspwebapp4docker:1.0). As we are not assigning any tag, the default tag is set to latest.
  • Now run the image
    • docker run -d -p 4000:80 --name aspwebapp4docker1.0 aspwebapp4docker
  • Once done, it'll show the ID of the container.
  • This can also be viewed from PowerShell:
    • docker ps
  • We can also view it from Docker Desktop.


Building Your First .Net Core Docker Image

This article will describe building and deploying a .Net Core sample application for Docker. We’ll be using Docker Desktop for Windows and Visual Studio 2022 Developer Edition.

To learn how to install Docker Desktop on Windows 10 please refer to my article

Deploying on Linux Container

Let’s follow the steps to create the project:

  • Open Visual Studio 2022 and create a new project
  • Let’s name it CoreWebApp4Docker
  • Enable Docker on the next screen and select Linux as container type
  • Now make some changes to the index.cshtml file to display some customized text after we've deployed. Change line 9 to: Learn about building Web apps with ASP.NET Core and deploying at Docker.
  • Let's have a look at the Dockerfile which is created automatically by Visual Studio.
  • The Dockerfile contains the build instructions for the image. It uses Docker's multi-stage build feature, which allows container images to be created in stages that produce intermediate images.
  • At the top of the file, the following section tells Docker to create an ASP.Net image from the Microsoft Container Registry. It creates an intermediate image named base that exposes port 80 and sets the working directory to /app:
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
  • The next stage is build, which starts from a different base image from the registry: the sdk image. The sdk image has the build tools and is quite a bit bigger than the aspnet image, which contains only the runtime.

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["CoreWebApp4Docker.csproj", "."]
RUN dotnet restore "./CoreWebApp4Docker.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "CoreWebApp4Docker.csproj" -c Release -o /app/build

  • The next stage publishes the application from the build stage:

FROM build AS publish
RUN dotnet publish "CoreWebApp4Docker.csproj" -c Release -o /app/publish /p:UseAppHost=false

  • The final stage uses base, which has only the runtime, and copies the recently published folder into the final image. As the final image contains only the runtime, it'll be quite a bit smaller than the build image.

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "CoreWebApp4Docker.dll"]

  • So we are done with the Dockerfile. Visual Studio 2022 has a built-in feature to build the application as a Docker image.
  • However, we won't do this. We'll use the WSL command line to build the application.
  • At first, run Docker Desktop and ensure that you are running Linux containers. If you find the option "Switch to Windows Containers…", it means you are on Linux containers.
  • Open Ubuntu Command Shell and see whether Docker is running:
    • docker
  • If the docker help output appears, Docker is running with Linux containers.
  • Build the image using the Docker file created by Visual Studio
  • Browse to the directory where the Dockerfile exists.
  • For me it is:
    • cd /mnt/c/Users/sultan/source/repos/CoreWebApp4Docker
  • Note that at WSL your Windows drive will be mounted under /mnt/
  • Now build using docker build
    • docker build -t corewebapp4docker .
  • We named the build corewebapp4docker, and the . at the end indicates the current directory, where the Dockerfile exists.
  • The name can be given in name:tag format (e.g. corewebapp4docker:1.0); otherwise it'll be tagged as corewebapp4docker:latest.
  • The output will be like this:
  • Now we’ll run the image
    • docker run -d -p 5000:80 --name corewebapp4docker1.0 corewebapp4docker
  • The -d flag runs the container in the background and shows the container ID only. -p maps host port 5000 to container port 80. --name names the container, and the last argument is the image name that we used during build.
  • Once done you can list the running container using:
    • docker ps
  • We can also view the image at Docker Desktop
  • That's all! We are done with deploying our .Net Core application to Docker.

Deploying on Windows Container

  • Now we'll deploy the same project to a Windows container.
  • At first we need to stop and remove the existing container:
    • docker stop corewebapp4docker1.0
    • docker rm corewebapp4docker1.0
  • We can run docker ps to be sure that we are done.
  • Now let's switch to Windows containers from Docker Desktop.
  • Now we will use Windows PowerShell to deploy and run.
  • PowerShell must be opened with Administrator privilege.
  • We'll follow the same steps we've used for Linux containers:
    • docker build -t corewebapp4docker .
    • docker run -d -p 5000:80 --name corewebapp4docker1.0 corewebapp4docker
  • And we are done. We can list the container using docker ps, or check the Docker Desktop window.
  • The application will be browsable from http://localhost:5000
  • There might be issues with access when downloading NuGet packages. In that case, at Docker -> Settings -> Docker Engine add the following JSON tag to the existing configuration:
    • "dns" : [""]
  • The setting will look like this:
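A minimal sketch of the Docker Engine configuration with the dns key added (8.8.8.8 is just an example public DNS server; substitute one appropriate for your network, and keep whatever other keys your configuration already has):

```json
{
  "experimental": false,
  "dns": ["8.8.8.8"]
}
```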
  • Now the image is available locally. To share the image we have to publish it to Docker Hub.

Publishing to Docker Hub

  • To publish the image to Docker Hub we need to have an account over there.
  • Go to Docker Hub (hub.docker.com) and create an account.
  • Now create a repository over there where we can publish the app.
  • Now come back to PowerShell and log in to Docker Hub with the username we've just created:
    • docker login -u sultanalam
  • We need to tag the currently built image with the username/repository name we've just created:
    • docker tag corewebapp4docker sultanalam/corewebapp4docker1.0
  • Now push the image to the hub using the new tag:
    • docker push sultanalam/corewebapp4docker1.0
  • The image will be available under the repository we’ve created


Configuring Docker on Windows 10

This article will describe the ways to enable a Windows 10 Machine to support docker.

You can run docker on Windows 10 using either of the following ways:

  • Linux Container
  • Windows Container

Enabling Linux Container:

  • The OS should be either Windows 10 64-bit Home or Pro 21H1 (build 19043) or higher. The version can be checked from PowerShell
    • winver
  • 64-bit processor
  • 4GB system RAM
  • BIOS-level hardware virtualization support must be enabled in the BIOS settings
  • The WSL2 (Windows Subsystem for Linux) feature must be enabled.
  • Open PowerShell in Administrator mode and run
    • wsl --install
  • You’ll be prompted for confirmation to install. Click yes to proceed.
  • Once done the following installation screen will be shown:
  • You need to reboot the system for the changes to take effect.
  • After reboot you can find Ubuntu (the default distribution) in your program menu.
  • Select Ubuntu to finish installation.
  • You will be prompted to set your UNIX user & password (not necessarily your Windows credentials; it can be anything you prefer)
  • The default UNIX/ Ubuntu user will be the username you set here.
  • If you select Ubuntu from program menu once again you will get the Ubuntu terminal
  • If you forget this default user password you can log in using root and reset the password.
  • To change the default user to root, run the Windows command prompt as Administrator:
    • ubuntu config --default-user root
  • Run Ubuntu from program menu once again and you’ll find the default user is set to root.
  • To change password
    • passwd username
  • Type the new password and reconfirm
  • Once done you can change the default user again
  • From PowerShell you can check the default distribution installed and list other distributions (if any) as well:
    • wsl --list

Installing Docker:

Docker for Windows can be downloaded from:

  • Run the installer and select the "Use WSL2 instead of Hyper-V" option. However, if your Windows does not support Hyper-V (e.g. you have Windows 10 Home Edition) you won't get this option.
  • You need to log out of Windows to finish the installation.
  • Once done you’ll find Docker Desktop is running
  • From PowerShell or Ubuntu, run:
    • docker
  • Docker containers on Windows can run as either Linux containers or Windows containers.
  • Select Docker Desktop from task bar and you can switch between Linux and Windows Container
  • As we have currently configured Docker for Linux containers (the default), the option to switch to Windows containers is shown here.

However, switching from the taskbar won't necessarily switch to Windows containers. A few more tasks need to be done, as described in the following section.

Enabling Windows Container:

  • Windows containers can be installed only if you have Windows 10 Pro edition
  • From the Start menu type "Turn Windows features on or off" and select the Containers & Hyper-V options
  • Once installation is done you’ll need to restart the PC


Microservice Architecture

Microservice Architecture is one of the new buzzwords in the software architecture arena. There are a lot of questions, confusions and contradictions about this pattern. Here I am trying to summarize the entire concept, which might seem a bit of an oversimplification. But it can still be a very good starting point for understanding the basic concepts.

The Traditional Architecture (The Monolith)

Let us start with the way we've been developing our software, considering a simple e-commerce storefront application that takes orders from customers, verifies inventory and available credit, and ships items.

We might assume the following:

  • It has both mobile and web interface
  • It has the API to expose the services to some third party applications
  • It has the ability to handle HTTP requests and is therefore deployed on a web server like IIS or Apache
  • It has the ability to execute business logic
  • It has access to a relational database like MS SQL or Oracle
  • And many more

Therefore, the application might have the following architecture:

  • A Store Front UI (Mobile/ Web/ Desktop)
  • Accounting Service
  • Inventory Service
  • Shipping Service
  • Business Layer
  • Data Access Layer
  • And whatever you think feasible

Fig: A Traditional Monolithic Architecture

This application would be a simple monolithic one which can be deployed on an IIS or Apache web server or whatever you prefer. A hardware or software load balancer can be incorporated to scale out and improve availability as well.

Now consider the advantages that we have with this architecture:

  • Simple and quick development cycle
  • Easy to deploy
  • Easy to scale

However, once the application begins to grow and more and more modules are added, it will be:

  • Difficult to maintain the large code base
  • Takes more time to introduce new developers
  • Bulky IDE
  • Difficult to patch and update
  • Continuous deployment requires rigorous testing
  • Stuck with the technology stack

The Microservice

Now let us think about it in a different way. Instead of having everything all together (the way we've described above), it might be designed with another approach where:

  • It is designed as a set of loosely coupled, collaborating services
  • Each of the services consists of its own business logic and database
  • The services can run independently and/or interdependently with other third-party services

Fig: Microservice Architecture

The benefits this architecture will provide:

  • Easier for a developer to get in
  • More responsive IDE
  • Easier to deploy a new version of any service
  • Easier to scale development across different teams
  • Eliminates the technology lock-in
  • Improved fault isolation

Anyway, nothing comes without disadvantages and the drawbacks are:

  • Developers have to deal with the additional complexity of a distributed system
  • Deployment is still an issue due to the complexity of managing different service types
  • Requires more effort on integration testing

Characteristics of Microservice

So far we have a basic understanding of both architectures. Let us try to define Microservice. From James Lewis and Martin Fowler:

“In short, the Microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”

Well, is that enough? Hard to grasp the definition? Let us put things in a simpler way. Instead of a formal definition, let us try to define the characteristics of Microservice:

  • Componentization via Services – The service must be independently upgradeable and replaceable.
  • Organized around Business Capabilities – The team must be sufficiently capable of delivering the end user's business needs themselves, with no segregation of teams by role like UI developer, DBA and so on.
  • Products not Projects – A team developing the product will own the product for its full lifetime, no project ownership but product.
  • Smart endpoints and dumb pipes – No enterprise service bus (ESB), the endpoint should be intelligent to handle end to end requirements.
  • Decentralized Governance – The management of services, along with the underlying technology, is not standardized centrally. Each service development team has its own choice of what exactly fits the need of that service.
  • Decentralized Data Management – No central database for all the services. Each service talks to its own database.
  • Infrastructure Automation – Unlike the monolith, automation of infrastructure is an utmost requirement for continuous delivery and integration.
  • Design for failure – Services need to be designed so that any failure is handled by the others as gracefully as possible.
  • Evolutionary Design – Enables application developers to control changes in their services without slowing down the handling of change requirements.

Monolith vs Microservice

Now let us try to differentiate the two architectures in the simplest form. The big advantage of Monolith is that it is simple and relatively easy to use. We have been using this architecture for so long and it is proven to work. However, when the question of continuous deployment comes in, Microservice is easier to deploy. If we want to upgrade a monolith we have to upgrade the entire thing, whereas with Microservice we can upgrade only the components that require upgrading. This becomes crucial when continuous upgrades are a part of development, with upgrades happening not once a month but almost every week or even every day.

Another great advantage of Microservice is availability. If one of our services goes down, we can still run the other services. However, the tradeoff here is consistency. It is always harder to maintain consistency for Microservice compared to Monolith. There are a lot of components communicating with each other through network layers, and one mess can mess up the whole.

In a monolith it is easier to refactor between modules, though we have to understand the module boundaries perfectly to do so. Still, if we have some issue we can easily do the refactoring. For Microservice, inter-module refactoring means a lot more: it is not only copying classes and objects from here to there, but making changes at the different service layers. However, this turns into the point that Microservice ensures modularity is preserved, because we are bound to do so. It is a kind of discipline that forces us to keep the modularity.

Last but not least, Microservice helps us go for multiple platforms. We can have one module written in C# and another in Java, or whatever is suitable, which is hardly practicable in a Monolith.

Microservice and SOA

The major question in this arena is: are Microservices just SOA? Before answering this, let us revisit what SOA means. From Don Box:

“There are four tenets of service-orientation

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is based on policy”

For Microservice the first two tenets are satisfied, with the real emphasis on the second. In fact, we can consider Service Oriented Architecture (SOA) a very broad term, something like our enterprise service bus, and Microservice just a subset of SOA.

Wrapping Up

So, that's all! A bit of an oversimplification of Microservice Architecture, but I think it can be a very good starting point. If we compare with Monolith, it is all about the tradeoff we want to make. We just cannot conclude that the Monolith era is over; rather, we should say that Microservice comes with the answer to a few burning questions of Monolith. However, you should go for Microservice only if you really need it. It is not a fancy thing to explore in production without proper planning.


Happy Micro servicing  🙂

Always Encrypted on SQL 2016

The history of database encryption is quite old. If we look back to the SQL 2000 era at the beginning of this century, there were no native tools to do so, but several third-party tools were always available to encrypt the data at rest. Another popular choice was to encrypt the entire data disk. Later on, SQL 2005 introduced cell-level encryption. A big change came with SQL 2008, which brought the concept of Transparent Data Encryption (TDE). But once again, like the third-party encryption tools, TDE can't encrypt data in motion. Therefore, data in memory or in the communication channel is still at risk. Moreover, TDE requires the entire database to be encrypted, which increases the size of the database.

Why Database Encryption

  • Separation of role between who own data and who manage data
  • Protecting sensitive data e.g. Credit Card Number, National ID
  • Running database and/or application in the cloud
  • Prevent high-privileged users from having access to sensitive data
  • Delegation of DBA role
  • Regulatory Compliance and Audits

Always Encrypted on SQL 2016

SQL 2016 comes into play with a solution to all of the above: Always Encrypted, meaning data is encrypted and remains so wherever it resides, readable only by the user who owns the data. Always Encrypted provides:

  • A transparent end to end solution for sensitive columns
  • All encryption and decryption is handled transparently by the driver library on the client
  • Allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to SQL Server
  • Data is never in plain text while being stored or accessed while on SQL Server (including while in memory)

How it works

The underlying working principle of Always Encrypted is quite straightforward. It requires the following to start up:

  • A Column Encryption Key (CEK) to encrypt the selected columns.
  • A Column Master Key (CMK) is required to encrypt the CEK.
  • Selection of the encryption algorithm.
  • Selection of the column to be encrypted.
  • A certificate store (either Windows or Azure) to store the certificate.
  • Installing the certificates where the client application runs.

Let’s get into detail of these in the following sections.

Always Encrypted Keys

Always Encrypted uses two types of keys: column master keys and column encryption keys.

Column Master Keys (CMK):

  • Used to encrypt column encryption keys
  • Only the location of the keys is stored in a system catalog view; the keys themselves never reach SQL Server
  • SQL Server does not contain the keys needed to decrypt data
  • Must be stored in a trusted key store
  • Column Master Keys must be deployed on each client machine that needs access to the unencrypted data

Column Encryption Keys (CEK):

  • Used to encrypt sensitive data stored in database columns
  • A single key can encrypt all values in a column/table
  • Encrypted values of the keys are stored in a system catalog view
  • Store this key in a secured/trusted location for backup
  • Each CEK can have 2 encrypted values from 2 CMKs to allow master key rotation
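The key metadata mentioned above can be queried directly from the system catalog views; a quick sketch:

```sql
-- Column master keys: name, key store provider and certificate path
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

-- Column encryption keys and their encrypted values
SELECT k.name, v.encryption_algorithm_name
FROM sys.column_encryption_keys AS k
JOIN sys.column_encryption_key_values AS v
    ON k.column_encryption_key_id = v.column_encryption_key_id;
```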

Type of Encryption

There are two types of encryption: deterministic and randomized.

Deterministic encryption:

  • Generates the same encrypted value for a given plain text
  • Allows grouping, filtering (by equality) and joining
  • Should be used on columns used for primary keys, indices or unique constraints
  • Higher chance of data being decrypted by an unauthorized user examining the patterns, especially when applied to a smaller set of data

Randomized encryption:

  • Encrypts data in a less predictable manner
  • More secure, because a different ciphertext is generated for the same plain text
  • Prevents equality searches, grouping, indexing and joining

Creating your first Always Encrypted Object

In this step-by-step example we'll be creating a very basic HR database with an Employee table consisting of only four columns: ID, Name, Sex and Salary. We'll encrypt the Salary column using deterministic encryption so that salary values can still be grouped and compared by equality. As the Sex column has a very small set of values (Male or Female), it is better to use randomized encryption for this data; otherwise the values could easily be presumed from the ciphertext patterns.

Here are the steps:

  • Create the HR DB
  • Create the Employee table:
    CREATE TABLE Employee(
    ID INT PRIMARY KEY,
    Name VARCHAR(50) NOT NULL,
    Sex VARCHAR(10) NOT NULL,
    Salary NUMERIC(10,2) NOT NULL)
  • Let's insert a few rows:
    INSERT INTO Employee
    VALUES (1,'David','Male',10000)
    INSERT INTO Employee
    VALUES (2,'Lily','Female',20000) 
  • Right click on Employee table and select Encrypt Columns…
  • The Encryption wizard will appear. Click Next on it
  • Select the columns you want to encrypt. Here the Sex and Salary columns are selected for encryption. The Sex column will be encrypted using the randomized type and the Salary column using the deterministic type. An auto-generated Column Encryption Key (CEK_Auto1) will be created.
  • Click Next to create the Column Master Key. Select the auto-generated option and save it to the Windows Certificate Store on the Local Machine.
  • Proceed next to continue
  • The wizard will execute the scripts.
  • Finally press close when it is done.
  • That's all; now we have things ready.
  • Running a select statement will show encrypted value for the rows inserted earlier.
  • The CEK and CMK can be found under Always Encrypted Keys Option of HR DB
  • The CMK certificate can be found at Windows Certificate Store of local machine. Run certmgr.msc and select Certificates under Intermediate Certification Authorities. The Always Encryption Auto Certificate1 can be found there.
  • You need to install this certificate on the client machine from where your application runs. You can export the certificate by right-clicking on the certificate name and selecting Export. The certificate can be imported by just double-clicking it on any client machine.
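Under the hood, the wizard recreates the encrypted columns with definitions along these lines (a sketch: CEK_Auto1 is the wizard's auto-generated key name, and encrypted string columns must use a BIN2 collation):

```sql
CREATE TABLE Employee(
    ID INT PRIMARY KEY,
    Name VARCHAR(50) NOT NULL,
    Sex VARCHAR(10) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
    Salary NUMERIC(10,2)
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL)
```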

Please ensure that the encryption wizard runs on some machine other than the DB server, because the DBA has complete access to the DB server, and if the key is available there the DBA can easily have a look at the decrypted values.

Well, so far we are done with the creation of the keys and certificates. Now we want to have a look at our original data. If you have the certificate installed on a machine with SQL Server Management Studio, you can easily view the data by adding the additional connection parameter Column Encryption Setting=Enabled (on the Additional Connection Parameters tab of the connection dialog).


Enable the client application to read and write the Encrypted Object

As we've already seen, adding the additional connection parameter allows us to view the encrypted data where the certificate is installed; similarly, for a client application we just need to add the same setting to the connection string.

If you are running an MVC web application on a Windows 2012 R2 server, you need to install the certificate on this server before you change the web.config.
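For example, the web.config connection string with the setting added might look like this (the server, database and connection names here are placeholders):

```xml
<connectionStrings>
  <add name="HRConnection"
       connectionString="Data Source=MyDbServer;Initial Catalog=HR;Integrated Security=True;Column Encryption Setting=Enabled"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```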

Controlling the performance of Data Layer

Controlling the performance of the data access layer is a major factor to consider while encrypting your data. Here are some rules of thumb.

When most of the queries access encrypted column:

  • Enable the encryption at connection string
  • Use SqlCommandColumnEncryptionSetting.Disabled for queries that do not access encrypted columns
  • Use SqlCommandColumnEncryptionSetting.ResultSetOnly for queries that have no parameters requiring encryption but retrieve encrypted columns

When most of the queries do not need to access encrypted column:

  • Disable the encryption at connection string
  • Use SqlCommandColumnEncryptionSetting.Enabled for queries that have encrypted parameters
  • Use SqlCommandColumnEncryptionSetting.ResultSetOnly for queries that have no parameters requiring encryption but retrieve encrypted columns
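As a sketch of how the per-command override looks in client code (the server and database names are placeholders; the SqlCommand constructor overload taking a SqlCommandColumnEncryptionSetting is part of System.Data.SqlClient from .NET Framework 4.6 onwards):

```csharp
using System.Data;
using System.Data.SqlClient;

class AlwaysEncryptedDemo
{
    static void Main()
    {
        // Connection-level default: encryption disabled, because most
        // queries in this app do not touch encrypted columns.
        using (var conn = new SqlConnection(
            "Data Source=MyDbServer;Initial Catalog=HR;" +
            "Integrated Security=True;Column Encryption Setting=Disabled"))
        {
            conn.Open();

            // Per-command override for a query whose parameter targets
            // the encrypted Salary column.
            var cmd = new SqlCommand(
                "SELECT ID, Name FROM Employee WHERE Salary = @Salary",
                conn, null, SqlCommandColumnEncryptionSetting.Enabled);
            cmd.Parameters.Add("@Salary", SqlDbType.Decimal).Value = 10000m;

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    System.Console.WriteLine(reader["Name"]);
                }
            }
        }
    }
}
```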


There are some data types as well as column types which are not supported as candidates for Always Encrypted:

  •  XML
  • timestamp/ rowversion
  • image
  • ntext/ text
  • sql_variant
  • hierarchyid
  • geography/ geometry
  • User defined type
  • String data types with a non-BIN2 collation
  • Alias
  • Sparse column set
  • Partitioning columns
  • Columns with default constraints/ check constraints
  • Referencing column can’t be encrypted with randomized option (for deterministic option the CEK must be the same)
  • Columns that are keys of full text indices
  • Columns referenced by computed columns when the expression does unsupported operations
  • Columns referenced by statistics
  • Table variable columns

Clause that can’t be used:


Features that are not supported:

  • Transactional or Merge Replication
  • Distributed Queries (linked servers)


There are other things like key rotation, migration from legacy systems (especially for Entity Framework) and many more. Here are some useful links which can be good starting points.

Always Encrypted (Database Engine):

Always Encrypted (Client Development):

Column Master Key Rotation and Cleanup with Always Encrypted:

Import/Export Windows Cert:

Happy Encrypting :)