Containerizing the infrastructure
Posted Jul 22, 2018
If you’ve been keeping an eye on my GitHub repos lately — and let’s be honest, who hasn’t — you might have noticed that several of my projects have acquired Dockerfiles.
The obvious reason is that I’m playing around with Docker, and since I’m the only known user of these projects I don’t really have to think about unsexy things like “compatibility” or “stability” beyond “it works on my machine”. As is usually the case, I’m mostly doing this for the fun of it. Learning new tech improves my marketable skills, and if I like what I’m learning, I’ll happily spend my evenings on it. So far, I’m pleased to say that Docker seems to be one of those cases.
It doesn’t hurt that Docker is now popular enough to be a bullet point in many job descriptions. In fact, that’s what made me give it a serious look. My current employer has some fairly “traditional” deployment procedures. Even though we run a number of different applications, the production environment consists of a handful of perpetually running Windows servers with important machine-wide configuration. So they won’t let me work with Docker on company time, but at some point I might just leave for a place that will. I then realized that regardless, I have a Docker use case all of my own. Allow me to pull back the curtain and tell you about my setup, and then rant a bit about my experience with Docker.
I barely get any visitors to my sites. I’m also kind of a cheapskate. For now, I run everything on a single $10 Ubuntu instance at Linode and, as of recently, serve it behind Cloudflare. The full list of what I need to run currently looks like this:
- four static sites
- two ASP.NET Core apps
- a Node app with Express
- a Python app with Falcon
- a MariaDB instance
- a MongoDB instance
- a Let’s Encrypt client
- an nginx reverse proxy
My ultimate goal is abstracting the server away enough that my entire collection of apps and sites becomes nothing more than a bunch of containers. That would open the door to some new options, like scaling up to meet demand or migrating to a new server with barely any setup (the amount of setup is why my server is still on 16.04). I’m definitely not at that point yet, but I feel like I’ve taken a big first step in that direction. All but the last two items in the list above are now running on Docker, and I find that pretty cool.
In my opinion, Docker is a difficult ecosystem to get into, and that’s mostly down to the tools you need to know to get anything productive out of it. Learning Docker on its own is fine, but it won’t really cut it for production deployments. For now I’m using docker-compose to manage my containers in production, but that in turn falls short for more advanced things like scaling or automatic deployments. Then you need something like Kubernetes or Docker Swarm, which is a whole new can of worms that I don’t have the energy to open. Yet.
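To give a flavor of what that looks like, here’s a minimal sketch of the kind of docker-compose.yml that could describe a setup like mine — the service names, images, ports and versions are made up for illustration, not my actual configuration:

```yaml
version: "3"

services:
  blog:                        # one of the static sites
    image: myuser/blog:latest
    ports:
      - "8080:80"

  api:                         # the Node/Express app
    image: myuser/api:latest
    ports:
      - "8081:3000"
    depends_on:
      - mariadb

  mariadb:
    image: mariadb:10.3
    env_file:
      - ./mariadb.env          # credentials live next to the compose file, not in the image
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:
```

With something like that in place, a single `docker-compose up -d` brings the whole lot up on the server.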
Once you start using Docker, your deployment process changes. The process for one of my apps used to be:
- Build the app in release mode
- Upload the build output to the server
- SSH into the server and restart the app
Very low-level and straightforward, right? Now the process looks more like this (roughly sketched in the commands after the list):
- Build an image of the app using a Dockerfile I previously created (which builds the app in release mode)
- Push the image to Docker Hub (for which you need an account)
- SSH into the server, pull the image and restart the container using docker-compose
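Concretely, that boils down to something like the commands below. The image name and service name are placeholders, not my actual setup:

```sh
# On my machine: build the image from the Dockerfile and push it to Docker Hub.
# "myuser/myapp" is a placeholder image name.
docker build -t myuser/myapp:latest .
docker push myuser/myapp:latest

# On the server (after SSH-ing in): pull the new image and recreate the container.
docker-compose pull myapp
docker-compose up -d myapp
```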
It’s only slightly more work, but it involves a bunch of extra concepts that don’t affect the app itself. What’s an image? What’s a container? What’s a Dockerfile, Docker Hub, docker-compose? Well, you’ll just have to RTFM. One site that helped me understand the basics is Docker curriculum, although it does leave out some important issues when it comes to more realistic app deployments. The official Docker documentation is pretty solid too, but it’s more of a reference than a gentle introduction.
For some things you can’t just RTFM though, and for me the big one was: how do I provide configuration files to containers? I obviously don’t want my production secrets in a public image, but I do need to provide them somehow. The official recommendation seems to be environment variables, and as nice and cross-platform as they are, they’re not really useful beyond simple key-value structures. Also, I don’t want to make too many changes to my apps in the name of containerization. If one day I decide that Docker isn’t for me, I want an easy out; I don’t want my app tightly coupled to its deployment mechanism. What I ended up doing was storing the private configuration next to the app’s docker-compose configuration and bind-mounting it into the right place. That feels like a natural home for it, since my docker-compose configurations are also generally private.
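As a sketch of what I mean — the file names here are hypothetical, but this is the shape of it: the config file lives on the server next to the compose file and gets mounted read-only into the container.

```yaml
# Fragment of a docker-compose.yml; names are illustrative.
services:
  myapp:
    image: myuser/myapp:latest
    volumes:
      # appsettings.Production.json sits next to this compose file on the
      # server and never ends up inside the (public) image.
      - ./appsettings.Production.json:/app/appsettings.Production.json:ro
```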
Perhaps my favorite feature of Docker is the Dockerfile, which is a recipe for building an app as a Docker image. It’s basically a sequence of repeatable, easy-to-read instructions that can be distributed along with the source code. It makes the app’s system requirements explicit and at the same time allows anyone to build it without having to deal with them. Imagine that five years from now you need to figure out what global packages your old app needs. Instead of reverse-engineering your old server or trial-and-erroring until it sort of works, you can just read your Dockerfile and be done with it.
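For example, a Dockerfile for one of my ASP.NET Core apps could look roughly like this — the project name and image tags are placeholders, and your mileage may vary with other SDK versions:

```dockerfile
# Illustrative multi-stage Dockerfile; "MyApp" is a placeholder project name.

# Build stage: restore packages and publish the app in release mode.
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# Runtime stage: just the published output on top of the runtime image.
FROM microsoft/dotnet:2.1-aspnetcore-runtime
WORKDIR /app
COPY --from=build /out .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The multi-stage build keeps the SDK out of the final image, so what actually gets deployed is only the runtime plus the published output.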
My Docker transformation is still a work in progress, and the main thing missing right now is getting nginx itself into a container. I imagine it will be the most cumbersome conversion, since my nginx setup has a bunch of configuration and brings all my sites down if it misbehaves. But if a tree falls in the forest and no one is around to hear it…