Automated resource creation in Azure

When creating single resources, e.g. a Virtual Machine or a Web App Service, it is very convenient to do it from the Azure Portal. It is so convenient that quite often we rush into creating a whole setup directly from the portal. But as soon as we need to replicate that setup, we see that this approach does not scale. In this article, I will go through our current options for automated resource creation in Azure and their advantages and disadvantages, based on my personal experience.

When is automated resource creation necessary?

Before digging into technical details, let us first understand when we need to create resources automatically. This approach is my personal choice anytime I am setting up something that will stay around longer than a day or two. In other words, if I am testing an idea and need a quick web app service or a storage account, I go ahead and create it from the portal. If I am setting up a solution to deploy an application, then I know that eventually I will need to recreate that setup. It could be that we need to replicate the environment, e.g. create a staging environment, or that we need to replicate the same setup in another region. In such scenarios, automating the resource creation will save you a great deal of effort later.

Another reason to automate resource creation is security. One has to be prepared for a catastrophic situation, e.g. your existing setup becoming inaccessible; having it automated will enable you to recreate it fast, lowering your downtime.

Note: although this post focuses on options for Azure, equivalent options exist for AWS as well.

Let us discuss our automation options.

ARM Templates

Azure Resource Manager (ARM) templates are Microsoft’s recommended way of automation. These templates are essentially JSON files with a “simple” structure. They can be written using any text editor, but VS Code has particularly good support for them.
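To give a feeling for the format, here is a minimal sketch of a template that creates a storage account; the parameter name, apiVersion and SKU are illustrative, not a recommendation:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[parameters('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```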

Advantages

There are certain advantages to using ARM templates, the most important one in my opinion being the support inside Azure. They can be integrated very well into Azure DevOps to automate infrastructure creation. I can also download the template of every manually created resource (mostly accurate), put it in source control and connect it to a pipeline. Voila, from manual to automated resource creation in a few simple steps.

These templates can also be used to make sure that the infrastructure has not changed. A valid scenario: if you run the infrastructure pipeline regularly and someone has changed a setting or removed a resource manually, the pipeline will correct the change and bring the infrastructure back to the desired state.
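As a sketch of how such a pipeline step might deploy a template with the Azure CLI — the resource group, file and parameter names here are hypothetical — note that `--mode Complete` also removes resources that are not in the template, which is what brings a drifted environment back to the desired state:

```shell
az deployment group create \
  --resource-group my-rg \
  --template-file template.json \
  --parameters storageName=mystorageacct \
  --mode Complete
```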

Also, the good integration with Azure Key Vault allows us to store secrets safely in the vault and access them easily from the pipeline, a particularly useful feature.

Disadvantages

There are certain disadvantages to ARM templates too. They are an Azure feature, and this knowledge is not reusable on other cloud platforms. Also, writing them is not super easy (though this is improving every day with better support in VS Code).

I would also classify the documentation about ARM templates as a disadvantage. Microsoft is improving this continuously by providing more samples and tutorials, but at the time of this writing, if you need something beyond a simple setup, finding your way will not be easy.

Terraform

Terraform is a cloud automation tool created by HashiCorp. Its support for Azure is pretty good and mature. Having multi-cloud support also makes Terraform an appealing tool.

Advantages

One of the main advantages of Terraform is its multi-cloud support. Even if you are not using a different cloud platform today, you can reuse this knowledge later when you need another cloud provider. The documentation is pretty decent, and you can find a lot of blogs and samples on the internet. Another very big advantage is the possibility to preview changes: I can see all the potential changes my script is going to make before I run it, giving me the chance to prevent a dangerous operation. This gives me confidence before executing any scripts, especially against production environments.
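The preview workflow looks roughly like this (the resource definitions themselves live in *.tf files):

```shell
terraform init             # download providers and initialise the backend
terraform plan -out=tfplan # preview every change without applying anything
terraform apply tfplan     # apply exactly the changes that were previewed
```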

Another favourite feature of Terraform is workspaces. Workspaces make it super easy to manage different environments of the same infrastructure setup.
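Switching between environments is then a one-liner (“staging” is just an example name):

```shell
terraform workspace new staging      # create and switch to a new workspace
terraform workspace select default   # switch back to the default workspace
terraform workspace list             # list all workspaces
```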

Disadvantages

Although Terraform is advertised as a multi-cloud solution, the abstraction over cloud providers is not at the desired level. Meaning, the building blocks of Terraform files are cloud specific: the code that creates an Azure storage account is not the same code that creates an AWS S3 bucket.
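To illustrate, here is roughly what the two resources look like side by side; the names and values are illustrative, not a complete configuration:

```hcl
# Azure: a storage account via the azurerm provider
resource "azurerm_storage_account" "example" {
  name                     = "examplestorage"
  resource_group_name      = "example-rg"
  location                 = "westeurope"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# AWS: an S3 bucket via the aws provider -- a different resource type
# with different arguments, even though the intent is similar
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
}
```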

Another big problem that can arise with Terraform is the state file. The state file contains the current state of the infrastructure, and if it gets corrupted, one cannot execute any changes against the cloud environment. Therefore, it’s important to store it in a shared location that all users of the script can access. Also, as the state file contains all the secrets applied, it itself becomes a secret you want to protect well.
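One common way to share the state is a remote backend, e.g. an Azure storage account. A sketch with hypothetical names:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestore"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
```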

Azure SDKs

Azure offers SDKs for different programming languages. They offer another programmatic alternative for automated resource creation in Azure. This can be very useful in specific scenarios, e.g. if you want to build a custom one-stop shop for resource creation in your organisation, where you can enforce your standards and hook resource creation into your specific organisational workflows and needs.

My experience so far with the Azure SDKs has been very limited, though I have not developed a very pleasant opinion about them. One big disadvantage I have faced is that the SDK itself was not up to date with Azure. In one such scenario, when creating an App Service, the options offered by the SDK for selecting the tech stack and some other settings were lacking compared to the options available in the portal.

Conclusion

Overall, I would say we are lucky to have many options to choose from. Which one you choose will often depend on the circumstances, but I hope this list of advantages and disadvantages will help shape your decision. In my case, I try to use Terraform whenever I can; if that is not possible, I usually fall back to ARM templates.

Kubernetes basic questions answered

When getting started with Kubernetes, it can be a daunting task at first to grasp the basic concepts that will allow you to move forward with it. I would like to try to answer some basic questions about Kubernetes, some of which I had in mind when I first started to learn and work with it.

If you do not have any understanding of containers and containerised applications, it becomes even harder to realise where Kubernetes fits in the big picture.

Explaining what containers are is out of the scope of this article. If you feel you still need clarity on what containers are and how they help, I suggest finding that out first before moving forward. If you feel comfortable with containers, then off we go.

What is Kubernetes?

Kubernetes is a container orchestrator. And what would that be? Well, when we containerise an application, we package the application and its environment into an image, but we still need to run it somehow. We can execute a docker run command and run a container, but then when we have an update or a new image, we would need to manually kill the running container and run the new one. And what happens when the container crashes, e.g. because of an unhealthy state? Who would take care of restarting it? This is where Kubernetes fits into the big picture.
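To make that concrete, this is the manual routine Kubernetes takes off our hands (the image name is hypothetical):

```shell
docker run -d --name myapp myregistry/myapp:1.0   # run the container

# a new image version arrives: kill the old container, start the new one by hand
docker stop myapp && docker rm myapp
docker run -d --name myapp myregistry/myapp:1.1
```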

Kubernetes orchestrates the lifecycle of a container. It can deploy containers together with their dependencies, restart them if they crash, update them if the image version changes, create new instances of the image without downtime, etc. Most of these can be fairly easily automated with the help of Kubernetes.

What is a Pod?

Well, in the previous paragraph I made a false statement. Kubernetes doesn’t actually focus directly on the containers. Kubernetes works on a one-level-higher abstraction called Pods. Pods can be thought of as mini virtual machines which can run multiple containers inside. Usually, Docker is used as the container engine, but this can be configured differently if needed.

Ideally, a pod runs one main container (e.g. one application or a service), and it can run other side containers which serve the main container. The reason behind this is that if one of the containers signals that it is unhealthy, Kubernetes will kill the whole Pod and try to create a new one. Therefore, it’s a good practice to have one main service running in a Pod; if that service is not healthy, the Pod is replaced with a new instance by the orchestration scheduler.
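For reference, a minimal Pod manifest looks like this (the names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: main            # the one main service of this Pod
      image: nginx:1.21
      ports:
        - containerPort: 80
```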

What applications can I run on Kubernetes?

Anything that can be containerised. Kubernetes supports stateless as well as stateful applications, although from experience I can say that running stateless applications is easier. That’s because managing state requires more management work from our side.

Personally, I try to push stateful software outside Kubernetes and use it from PaaS providers. One example of such a scenario is the database. This leaves me more room to focus on running the in-house developed applications and less attention on dependencies.

What is kubectl?

Kubectl is a CLI tool to query and manage Kubernetes. Kubernetes has several types of resources: Pods, Services, Deployments, ConfigMaps, etc. Kubectl allows us to easily find information about those resources as well as change them. One example would be reading the deployment configuration of a pod; another would be scaling up a deployment.
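The two examples above would look something like this (“myapp” is a hypothetical deployment name):

```shell
kubectl get pods                               # list pods in the current namespace
kubectl describe deployment myapp              # read a deployment's configuration
kubectl scale deployment myapp --replicas=3    # scale a deployment up
```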

One can get most (if not all) of these using a UI, but come on, who needs a UI nowadays ☺️.

I want to have a Kubernetes cluster, what are my options?

Starting from the most obvious option, you can get some bare-metal servers and install your own Kubernetes cluster. Though, I would strongly advise against this unless you really know what you are doing. Kubernetes is a very complex system. It has several components, and a good configuration requires several servers. Just keeping the configuration safe, available and up to date would be a challenge, let alone taking care of more complex topics like the security of the cluster.

Unless you are constrained here, I would strongly recommend you start with one of the cloud providers that offer Kubernetes as a service, amongst them Azure, AWS, and DigitalOcean.

The cloud providers abstract away the management of the cluster itself and give you freedom to focus on actually building your application infrastructure.

When is Kubernetes good for me?

If you have only one or two applications running, you are better off without it. Kubernetes offers great functionality to orchestrate containers, but it also comes with an administration overhead. If you are not building many (3+) different applications or microservices that you deploy frequently (several times per month), in my opinion it would not be a good option.

Kubernetes is a great helper in an environment of multiple microservices where continuous delivery is the process. It is overkill for running 2-3 applications which get deployed a couple of times per month. You get my point.

Start small and adjust as you grow!

Conclusion

Kubernetes is one of our time’s coolest tools. It has enabled many business solutions to scale flexibly and shine. But at the same time, it can be a complex beast. Approach it with care and prepare well before adopting it. Equipped with knowledge, it will take your DevOps processes, and with them your ability to react to change, to a whole new level.

Setup your deployment using Dokku

Production deployment environments come in all sorts of variations nowadays. The configuration architecture is influenced mainly by the size of the application and the budget, but also by the process flow and how easy it is to do the deployment. Quite often, in the early phase of an application or business, you may not need a full-fledged cloud deployment setup. This post assumes that you have small infrastructure requirements but want a smooth deployment process. For the simpler scenario where you need a single server or a few of them, Dokku does a great job by making it possible to push code to production with a simple git push.

Dokku is a Heroku on your own infrastructure. It allows easy management of your application’s deployment workflow by running applications in Docker containers. It also uses an Nginx reverse proxy, but as its configuration is managed by Dokku itself, you barely notice it. After you deploy your application, you can scale it up and down with a single command. Through Dokku plugins, you can also create and manage database instances (e.g. Postgres or MySQL), schedule automatic backups to AWS S3, create Redis instances and configure HTTPS using Let’s Encrypt.

Some cloud infrastructure providers like DigitalOcean offer a ready-made server image of Dokku which you can instantiate in minutes and start using right away. In this post, I will go through the process of installing and setting up your own Dokku environment. For this writing, I’ll assume you have an Ubuntu server with SSH enabled already in place. For your information, I have a virtual machine (VM) on VirtualBox running Ubuntu 16.04, but you can do the same on any instance of Ubuntu running anywhere. Off we go…

Step 1: Installation of dokku

To do the installation, we follow these commands:

wget https://raw.githubusercontent.com/dokku/dokku/v0.11.4/bootstrap.sh

sudo DOKKU_TAG=v0.11.4 bash bootstrap.sh

The first line will download the installation script and the second line will actually do the installation. Downloading the script will be fast, but the installation itself will take a few minutes (depending on the performance of your server). Please check this page to find out the most recent version.

Step 2: Finishing the setup from the browser

Once the installation is done, open your server’s IP address in a browser to reach the dokku setup page.

dokku setup page

Here in the “Public Key” field, we should paste our public key (mine is at ~/.ssh/id_rsa.pub) and then click the Finish Setup button. After this step, our dokku is set and ready.

Step 3: Creating and deploying an application

Now that we have dokku ready, we want to deploy our first app. Setting up the application is a one-time thing which we need to do from the server’s console. After we are connected to our server using ssh, we can create and configure the application in two steps:

    • Run dokku apps:create name_of_app to create the app. We can verify that the app is created by running dokku apps:list
    • Run dokku domains:add name_of_app hello-world-app.com to configure the domain of the app.

In order to serve the app, nginx needs to have a domain configured for the app. If you have a real domain to configure, go for it. For this post, I will use a fake domain which I will map in my /etc/hosts file to the IP address of my VM. So for this example, I will use the hello-world-app.com domain.
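As a sketch, the mapping is a single line in /etc/hosts; the IP address below is my VM’s and yours will differ:

```shell
echo "192.168.56.10  hello-world-app.com" | sudo tee -a /etc/hosts
```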

Next, we need to set the git remote for the dokku environment. We do this by running git remote add remote_name dokku@dokku_server:name_of_app. Here,

  • remote_name is the name we want to give to our deployment environment, this could be anything you like e.g. production, staging, etc.
  • dokku_server is the IP address or the URL of the dokku server we just configured
  • name_of_app is the name we specified when we created the app.
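Filled in with example values (a remote named production, the fake domain from above, and name_of_app as created earlier), this becomes:

```shell
git remote add production dokku@hello-world-app.com:name_of_app
```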

Now that we have it ready, we can deploy our app by running git push remote_name master. When the deployment is finished, you can test your application by visiting http://hello-world-app.com

p.s. if you don’t have a ready application to deploy, go ahead and download a hello world web application from my GitHub account.

 

Step 4 (optional): Scaling your application

Now that we have our web application deployed, it is running in a single Docker container instance. For a production deployment, we would probably want to scale it to more instances for better performance. As a rule of thumb, we might want to scale the app to as many instances as the number of processor cores we have. Assuming we have an 8-core processor in our server, we could scale our app to 8 instances by running

dokku ps:scale name_of_app web=8

After that, we can check the instances by running either dokku ls or sudo docker ps.

Conclusion

When setting up new applications, I try to take the pragmatic approach and keep things as simple as possible. In my opinion, dokku is a good starting point. It makes the deployment process dead simple and gives us the flexibility to scale as we need. Once the application starts facing a lot of traffic and this infrastructure has a hard time coping with it, then I start thinking about more advanced deployment workflows.

If you have followed the steps and tried your own installation, I’d like to hear about your experience. Please post a comment and share it with us.

Hope it helped!