
Why We Started Using Kubernetes
Our journey really started about 4 years ago, shortly before Kubernetes was officially released (in July 2015). Truthfully, at that time we didn’t even know it existed. As a company we’d been working on more and more complicated web applications, most of which couldn’t simply be chucked into a typical hosting account and left to run. We had plenty of experience with server management, so many of these applications ended up deployed on their own infrastructure for simplicity. We doubled down on Docker and began relying heavily on it to provide reliable environments for local development that we could easily replicate in the cloud. And life was great.
The problems grew the more often we did this. Suddenly we weren’t just managing a couple of applications across a couple of servers. We had to devote a significant amount of time and effort to setting up new infrastructure for these applications, and every time we did, it became another thing we’d have to keep tabs on – possibly for years!
So what if we could simplify these deployments? We already had all the isolation and configuration we needed with Docker; we just needed somewhere to run it. Kubernetes gives us precisely that.
First Things First: Save Yourself Some Heartbreak
Over the past few months we’ve learned some hard lessons. In hindsight we’re glad we learned them, but at times it was far from the joyful experience we started out with. With that in mind, these are the key takeaways…
Resources: Know Your Limits
Kubernetes isn’t witchcraft. If you know that your web application needs 1GB of RAM to run properly, then make sure that Kubernetes knows that too. Every Docker container ultimately runs on a server (or node), and that server has limits. Once Kubernetes knows the resource requests and limits for your application, it can place it on a node that actually has sufficient capacity to serve it. Trust me – if you deploy a handful of applications to your cluster without doing this and one of your nodes runs out of resources, you’re about to have a very bad day.
You can read more about resource requests and limits here.
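As a rough sketch, a Deployment manifest with requests and limits set might look something like this (the names, image, and values are illustrative, not taken from our actual setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                      # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.0 # placeholder image
          resources:
            requests:
              memory: "1Gi"          # the scheduler uses requests to pick a node with room
              cpu: "250m"
            limits:
              memory: "1Gi"          # the container is OOM-killed if it exceeds this
              cpu: "500m"
```

With requests in place, the scheduler will refuse to cram the pod onto a node that can’t actually accommodate it.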
Persistent Storage: Not All PVCs Are Created Equal
Since containers are volatile, Kubernetes gives you the ability to create persistent volume claims which won’t be trashed at the drop of a hat. Unfortunately, there are many types of persistent volumes, and most of them can only be mounted read-write by a single node at a time (the ‘ReadWriteOnce’ access mode). That makes them a poor fit if you’re trying to run multiple replicas of the same application spread across nodes.
Our solution to this was to run a network file server that can be mounted simultaneously by multiple other pods. There are a handful of ways to go about this (even within Kubernetes), but a simple NFS volume can save a lot of frustration here.
You can read more about the types of Persistent Storage here.
We’ve also used this example as a guide to successfully set up an NFS share within Kubernetes.
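For illustration, here’s roughly what an NFS-backed volume and a shareable claim might look like; the server address, export path, and sizes are placeholders rather than our real configuration:

```yaml
# A PersistentVolume backed by an NFS server, plus a claim that multiple pods
# can mount at the same time thanks to the ReadWriteMany access mode.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-files
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.20                # hypothetical NFS server address
    path: /exports/shared            # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""               # bind to the pre-created PV above rather than a dynamic provisioner
  resources:
    requests:
      storage: 10Gi
```

Any pod that mounts the ‘shared-files’ claim then sees the same files, regardless of which node it lands on.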
Rollout Strategies: Sometimes It’s Easier To Recreate
Sometimes Kubernetes ends up in a deadlock with itself. You’ve tried to update a deployment (or a pod has been rescheduled for whatever reason) and Kubernetes wants to create the new pod before terminating the old one. Unfortunately, if a resource (like a PVC) is attached to the existing pod and required by the new pod, the rollout can never complete: the new pod sits waiting for a volume the old pod still holds.
In this scenario, it’s sometimes acceptable to set the rollout strategy for that deployment to ‘Recreate’. Doing this forces Kubernetes to terminate any existing pods first (and free up anything attached) before attempting to create any new pods. This is especially useful with PVCs that have a ‘ReadWriteOnce’ access mode.
You can read more about the ‘Recreate’ rollout strategy here.
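Here’s a minimal sketch of what that looks like in a Deployment spec, assuming a hypothetical ‘web-app-data’ claim with ReadWriteOnce access:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  strategy:
    type: Recreate                   # terminate existing pods (and release their volumes) before creating new ones
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: example/web-app:1.0 # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-app-data  # a ReadWriteOnce claim that only one pod can hold
```

The trade-off is a short window of downtime during each rollout, since the old pod is gone before the new one starts.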
Databases: Stay Out Of This
This is purely my opinion. Managing databases inside of Kubernetes is a major pain in the ass and not worth the headaches. For many of the same reasons that PVCs are difficult to manage, we usually opt for a separate, properly configured database cluster instead. That said, if you do need to run a database inside of Kubernetes, then our advice on Persistent Storage and Rollout Strategies (above) could at least help you overcome some of these obstacles.