Microservice Architecture

Microservice Architecture is one of the newer buzzwords in the software architecture arena. There are a lot of questions, confusions and contradictions about this pattern. Here I am trying to summarize the entire concept, which might seem a bit of an oversimplification, but it can still be a very good starting point for understanding the basics.

The Traditional Architecture (The Monolith)

Let us start with the way we’ve been developing our software, considering a simple e-commerce storefront application that takes orders from customers, verifies inventory and available credit, and ships items.

We might assume the following:

  • It has both mobile and web interfaces
  • It exposes an API to third-party applications
  • It handles HTTP requests and is therefore deployed on a web server like IIS or Apache
  • It executes business logic
  • It accesses a relational database like MS SQL Server or Oracle
  • And many more

Therefore, the application might have the following architecture:

  • A Store Front UI (Mobile/ Web/ Desktop)
  • Accounting Service
  • Inventory Service
  • Shipping Service
  • Business Layer
  • Data Access Layer
  • And whatever you think feasible

Fig: A Traditional Monolithic Architecture

This application would be a simple monolithic one that can be deployed on IIS, Apache, or whichever web server you prefer. A hardware or software load balancer can be incorporated to scale out and improve availability as well.

Now consider the advantages that we have with this architecture:

  • Simple and quick development cycle
  • Easy to deploy
  • Easy to scale

However, once the application begins to grow and more and more modules are added onto it, it will be:

  • Difficult to maintain the large code base
  • More time needed to bring new developers up to speed
  • A bulky, slow IDE
  • Difficult to patch and update
  • Continuous deployment requires rigorous testing of the entire application
  • Being stuck with the initial technology choices

The Microservice

Now let us think about it in a different way. Instead of having everything bundled together (the way we’ve described above), the application might be designed with another approach where:

  • It is designed as a set of loosely coupled, collaborating services
  • Each service contains its own business logic and database
  • The services can run independently, or interdependently with other third-party services

Fig: Microservice Architecture
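
A minimal sketch of this design: two tiny “services”, each owning its own in-memory data store, collaborating over plain HTTP. All names, routes and data here are illustrative, not from the storefront example:

```python
# Minimal sketch: an "inventory service" and an "order service", each with
# its own private data, talking over HTTP. Names/ports are illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY_DB = {"book": 5}  # the inventory service's own private "database"

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "stock": INVENTORY_DB.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def start_inventory_service():
    server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def place_order(item, qty, inventory_port):
    """Order-service logic: verify stock by calling the inventory service's API."""
    url = f"http://127.0.0.1:{inventory_port}/{item}"
    with urllib.request.urlopen(url) as resp:
        stock = json.load(resp)["stock"]
    return "accepted" if stock >= qty else "rejected"

if __name__ == "__main__":
    server, port = start_inventory_service()
    print(place_order("book", 2, port))   # accepted
    print(place_order("book", 99, port))  # rejected
    server.shutdown()
```

Note that the order service knows nothing about the inventory service’s storage; it only depends on the HTTP contract, which is exactly what makes the services independently replaceable.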

The benefits this architecture provides:

  • Easier for a new developer to get started
  • A more responsive IDE
  • Easier to deploy a new version of any service
  • Easier to scale development across different teams
  • Eliminates technology lock-in
  • Improved fault isolation

However, nothing comes without disadvantages, and the drawbacks are:

  • Developers have to deal with the additional complexity of a distributed system
  • Deployment is still an issue due to the complex nature of managing different service types
  • Requires more effort on integration testing

Characteristics of Microservice

So far we have a basic understanding of both architectures. Let us try to define Microservice. From James Lewis and Martin Fowler:

“In short, the Microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”

Clear enough? Or hard to digest as a definition? Let us put things in a simpler way: instead of a formal definition, let us try to define the characteristics of a Microservice:

  • Componentization via Services – Each service must be independently upgradeable and replaceable.
  • Organized around Business Capabilities – The team must be capable of delivering the end user’s business needs on its own; no segregation of teams into UI developers, DBAs and so on.
  • Products not Projects – The team developing a product owns it for its full lifetime: product ownership, not project ownership.
  • Smart endpoints and dumb pipes – No enterprise service bus (ESB); each endpoint should be intelligent enough to handle its end-to-end requirements.
  • Decentralized Governance – Services and their underlying technologies are not standardized centrally. Each service’s team is free to choose exactly what that service needs.
  • Decentralized Data Management – No central database for all the services; each service talks to its own database.
  • Infrastructure Automation – Unlike a monolith, infrastructure automation is an absolute requirement for continuous integration and delivery.
  • Design for failure – Services need to be designed so that any service failure is handled by the others as gracefully as possible.
  • Evolutionary Design – Service developers can control changes in their services without slowing down the handling of changing requirements.
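
The “Design for failure” point is worth a small illustration. Below is a toy circuit breaker around a hypothetical downstream call: after repeated failures it stops calling the service and degrades gracefully with a fallback (threshold and names are illustrative):

```python
# Toy circuit breaker sketching "design for failure": after max_failures
# consecutive errors, stop calling the downstream service and fail fast.
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, operation, fallback):
        if self.failures >= self.max_failures:
            return fallback()            # circuit "open": fail fast
        try:
            result = operation()
            self.failures = 0            # success closes the circuit again
            return result
        except Exception:
            self.failures += 1           # count the failure
            return fallback()            # respond gracefully instead of crashing

if __name__ == "__main__":
    def flaky():
        raise ConnectionError("shipping service is down")

    breaker = CircuitBreaker(max_failures=2)
    for _ in range(4):
        print(breaker.call(flaky, lambda: "queued for retry"))
```

Real deployments use hardened libraries for this pattern, but the principle is the same: a failing service should degrade the system, not take it down.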

Monolith vs Microservice

Now let us try to differentiate the two architectures in the simplest form. The big advantage of the Monolith is that it is simple and relatively easy to use. We have been used to this architecture for so long, and it is proven to work. However, when the question of continuous deployment comes in, Microservice is easier to deploy. If we want to upgrade a monolith, we have to upgrade the entire thing, whereas with Microservice we can upgrade only the components that require it. This becomes crucial when continuous upgrading is part of development, with upgrades happening not once a month but almost every week, or even every day.

Another great advantage of Microservice is availability. If one of our services goes down, we can still run the other services. The tradeoff here, however, is consistency. It is always harder to maintain consistency in Microservice compared to Monolith: there are a lot of components communicating with each other through network layers, and one mess can mess up the whole.

In a monolith it is easier to refactor between modules, though we have to understand the module boundaries perfectly to do so; still, if we have an issue we can easily refactor. For Microservice, inter-module refactoring means a lot more: it is not just copying classes and objects from here to there, but making changes at the service layer. On the other hand, this means Microservice helps preserve modularity, because we are bound to do so. It is a kind of discipline that forces us to keep things modular.

Last but not least, Microservice lets us use multiple platforms. We can have one module written in C# and another in Java, or whatever is suitable, which is hardly practicable in a Monolith.

Microservice and SOA

The major question in this arena is: are Microservices just SOA? Before answering this, let us revisit what SOA means. From Don Box:

“There are four tenets of service-orientation

  • Boundaries are explicit
  • Services are autonomous
  • Services share schema and contract, not class
  • Service compatibility is based on policy”

For Microservice, the first two tenets are satisfied, with the real emphasis on the second. In fact, we can consider Service Oriented Architecture (SOA), which encompasses things like the enterprise service bus, to be a very broad term, and Microservice just a subset of SOA.

Wrapping Up

So, that’s all! A bit of an oversimplification of Microservice Architecture, but I think it can be a very good starting point. Compared with the Monolith, it is all about the tradeoffs we want to make. We cannot conclude that the Monolith era is over; rather, we should say that Microservice answers a few of the Monolith’s burning questions. However, you should go for Microservice only if you really need it. It is not the kind of fancy thing you roll out to production without proper planning.


Happy Microservicing 🙂

Always Encrypted on SQL 2016

The history of database encryption is quite old. If we look back to the SQL 2000 era, at the beginning of this century, there were no native tools for it, but several third-party tools were always available to encrypt data at rest. Another popular choice was to encrypt the entire data disk. Later on, SQL 2005 introduced cell-level encryption. A big change came in SQL 2008, which brought the concept of Transparent Data Encryption (TDE). But once again, like the third-party encryption tools, TDE can’t encrypt data in motion; data in memory or on the communication channel is still at risk. Moreover, TDE requires the entire database to be encrypted, which increases the size of the database.

Why Database Encryption

  • Separation of roles between those who own the data and those who manage it
  • Protecting sensitive data e.g. Credit Card Number, National ID
  • Running database and/or application in the cloud
  • Prevent high-privileged users from having access to sensitive data
  • Delegation of DBA role
  • Regulatory Compliance and Audits

Always Encrypted on SQL 2016

SQL 2016 comes into play with a solution to all of the above: “Always Encrypted”, meaning data is encrypted and remains so wherever it resides, readable only by the user who owns it. Always Encrypted provides:

  • A transparent end to end solution for sensitive columns
  • All encryption and decryption is handled transparently by the driver library on the client
  • Allows clients to encrypt sensitive data inside client applications and never reveal the encryption keys to SQL Server
  • Data is never in plain text while being stored or accessed while on SQL Server (including while in memory)

How it works

The underlying working principle of Always Encrypted is quite straightforward. It requires the following to get started:

  • A Column Encryption Key (CEK) to encrypt the selected columns.
  • A Column Master Key (CMK) is required to encrypt the CEK.
  • Selection of the encryption algorithm.
  • Selection of the column to be encrypted.
  • A certificate store (either Windows or Azure) to store the certificate.
  • Installing the certificates where the client application runs.

Let’s get into detail of these in the following sections.

Always Encrypted Keys

Always Encrypted uses two types of keys: column master keys and column encryption keys.

Column Master Keys (CMK):

  • Used to encrypt column encryption keys
  • Encrypted values of the keys, along with their location, are stored in system catalog views
  • SQL Server does not contain the keys needed to decrypt data
  • Must be stored in a trusted key store
  • Column Master Keys must be deployed on each client machine that needs access to the unencrypted data

Column Encryption Keys (CEK):

  • Used to encrypt sensitive data stored in database columns
  • A single key can encrypt all values in a column or table
  • Encrypted values of the keys are stored in system catalog views
  • Store these keys in a secure, trusted location for backup
  • Each CEK can have 2 encrypted values from 2 CMKs to allow master key rotation
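
For reference, the two keys above are ordinarily created by the SSMS wizard (shown later), but they can also be registered by hand in T-SQL. This is a sketch only: the certificate thumbprint and the encrypted CEK blob below are placeholders, since the tooling normally generates them.

```sql
-- Register a Column Master Key that lives in the Windows certificate store.
-- The KEY_PATH thumbprint is machine-specific; the value below is a placeholder.
CREATE COLUMN MASTER KEY CMK1
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/<certificate thumbprint>'
);

-- Register a Column Encryption Key, encrypted under CMK1.
-- ENCRYPTED_VALUE is the CEK ciphertext produced by the tooling, elided here.
CREATE COLUMN ENCRYPTION KEY CEK1
WITH VALUES (
    COLUMN_MASTER_KEY = CMK1,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x...
);
```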

Type of Encryption

There are two types of encryption: deterministic and randomized.

Deterministic encryption:

  • Generates the same encrypted value for a given plain text
  • Allows grouping, filtering and joining
  • Should be used on columns serving as primary keys, indices or unique constraints
  • Gives an unauthorized user a better chance of deducing data by examining patterns, especially when applied to a small set of values

Randomized encryption:

  • Encrypts data in a less predictable manner
  • More secure, because different ciphertext is generated for the same plain text
  • Prevents equality searches, grouping, indexing and joining
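
The practical difference can be sketched with a toy model. This is not the real algorithm SQL Server uses (AEAD_AES_256_CBC_HMAC_SHA_256); HMAC and XOR stand in for it here, purely to show why deterministic ciphertext supports equality operations while randomized ciphertext does not:

```python
# Toy illustration of deterministic vs randomized encryption.
# NOT real Always Encrypted crypto: HMAC/XOR stand in for AES here,
# only to demonstrate the equality-matching property.
import hashlib
import hmac
import os

KEY = b"demo-column-encryption-key"

def deterministic_encrypt(plaintext: bytes) -> bytes:
    # Same key + same plaintext -> same ciphertext, so the server
    # can compare, group and join on the encrypted values.
    return hmac.new(KEY, plaintext, hashlib.sha256).digest()

def randomized_encrypt(plaintext: bytes) -> bytes:
    # A fresh random nonce makes every ciphertext different,
    # which defeats equality searches but leaks no patterns.
    # (Toy keystream: plaintext limited to 32 bytes.)
    nonce = os.urandom(16)
    keystream = hashlib.sha256(KEY + nonce).digest()
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream))
    return nonce + cipher

if __name__ == "__main__":
    print(deterministic_encrypt(b"Male") == deterministic_encrypt(b"Male"))  # True
    print(randomized_encrypt(b"Male") == randomized_encrypt(b"Male"))        # False
```

This is also why a small-domain column like Sex is risky under deterministic encryption: only two distinct ciphertexts would ever appear, and the pattern gives the values away.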

Creating your first Always Encrypted Object

In this step-by-step example we’ll create a very basic HR database with an Employee table consisting of only four columns: ID, Name, Sex and Salary. We’ll encrypt the Salary column using deterministic encryption, because salaries need to be compared across different ranges of values. As the Sex column has a very small domain (Male or Female), it is better to use randomized encryption; otherwise the values could easily be inferred.

Here are the steps:

  • Create the HR DB
  • Create the Employee table (column types here are illustrative; only Salary’s type appeared in the original script):
    CREATE TABLE Employee(
      ID INT NOT NULL,
      Name NVARCHAR(50) NOT NULL,
      Sex NVARCHAR(10) NOT NULL,
      Salary NUMERIC(10,2) NOT NULL)
  • Let’s insert a few rows:
    INSERT INTO Employee
    VALUES (1,'David','Male',10000)
    INSERT INTO Employee
    VALUES (2,'Lily','Female',20000)
  • Right-click on the Employee table and select Encrypt Columns…
  • The encryption wizard will appear. Click Next
  • Select the columns you want to encrypt. Here the Sex and Salary columns are selected: the Sex column will be encrypted using the randomized algorithm and the Salary column using the deterministic algorithm. An auto-generated Column Encryption Key (CEK_Auto1) will be created
  • Click Next to create the Column Master Key. Select the auto-generated option and save it to the Windows Certificate Store on the Local Machine
  • Proceed with Next to continue
  • The wizard will execute the scripts
  • Finally, press Close when it is done
  • That’s all; now we have things ready
  • Running a SELECT statement will show encrypted values for the rows inserted earlier
  • The CEK and CMK can be found under the Always Encrypted Keys node of the HR DB
  • The CMK certificate can be found in the Windows Certificate Store of the local machine. Run certmgr.msc and select Certificates under Intermediate Certification Authorities; the Always Encrypted Auto Certificate1 can be found there
  • You need to install this certificate on the client machine from which your application runs. You can export it by right-clicking on the certificate name and selecting Export; it can be imported by just double-clicking it on any client machine

Please ensure that the encryption wizard runs on a machine other than the DB server. The DBA has complete access to the DB server, and if the key is available there, the DBA can easily look at the decrypted values.

Well, so far we are done with the creation of the keys and certificates. Now we want to have a look at our original data. If you have the certificate installed on a machine that also has SQL Server Management Studio, you can easily view the data by adding the additional connection parameter Column Encryption Setting = Enabled.


Enable the client application to read and write the Encrypted Object

As we’ve already seen, adding the additional connection parameter lets us view the protected data in plain text wherever the certificate is installed; similarly, for a client application we just need to add the same parameter to the connection string.
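
For instance, an ADO.NET-style connection string in web.config would gain the Column Encryption Setting keyword (the server and database names below are placeholders):

```
Data Source=myServer;Initial Catalog=HR;Integrated Security=True;Column Encryption Setting=Enabled
```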

If you are running an MVC web application on a Windows 2012 R2 server, you need to install the certificate on that server before you change the web.config.

Controlling the performance of Data Layer

Controlling the performance of the data access layer is a major factor to consider while encrypting your data. Here are some rules of thumb.

When most of the queries access encrypted column:

  • Enable the encryption at connection string
  • Use SqlCommandColumnEncryptionSetting.Disabled for queries that do not access encrypted columns
  • Use SqlCommandColumnEncryptionSetting.ResultSetOnly for queries that have no parameters requiring encryption but retrieve data from encrypted columns

When most of the queries do not need to access encrypted column:

  • Disable the encryption at connection string
  • Use SqlCommandColumnEncryptionSetting.Enabled for queries that have encrypted parameters
  • Use SqlCommandColumnEncryptionSetting.ResultSetOnly for queries that have no parameters requiring encryption but retrieve data from encrypted columns


There are some data types, as well as column configurations, which cannot be candidates for Always Encrypted:

  •  XML
  • timestamp/ rowversion
  • image
  • ntext/ text
  • sql_variant
  • hierarchyid
  • geography/ geometry
  • User defined type
  • String data types with a non-BIN2 collation
  • Alias data types
  • Sparse column set
  • Partitioning columns
  • Columns with default constraints/ check constraints
  • Referencing columns can’t be encrypted with the randomized option (for the deterministic option, the CEK must be the same)
  • Columns that are keys of full text indices
  • Columns referenced by computed columns when the expression does unsupported operations
  • Columns referenced by statistics
  • Table variable columns

Clauses that can’t be used:


Features that are not supported:

  • Transactional or Merge Replication
  • Distributed Queries (linked servers)


There are other things like key rotation, migrating from legacy systems (especially with Entity Framework) and many more. Here are some useful links which can be good starting points.

Always Encrypted (Database Engine):



Always Encrypted (Client Development):



Column Master Key Rotation and Cleanup with Always Encrypted:


Import/Export Windows Cert:


Happy Encrypting :)