Last updated on May 2, 2020
We ❤️ containerization. We ❤️ Docker too, but we left Docker Swarm behind a while ago for K8S – a story for another time – so I’m a little hesitant to praise Docker too much when what I really like is having a convenient packaging mechanism.
As a developer joining Amazon from an investment bank, I was amazed at the deployment infrastructure. The idea that deploying a change to production was as simple as merging a pull request for almost every new project was incredible. Previously that had been the stuff of numerous Jenkins jobs, imperative bash scripts and suspicion. The weekly deploy on my team at unnamed bank had essentially been the full-time job of a teammate when factoring in the numerous (pointless) sign-offs required.
When I left to co-found Happy Valley, I was supremely bummed out to discover that Amazon’s internal offerings in this regard were well ahead of anything I could really find in the open. We would have loved to use a PaaS like Heroku, but our first client had a large legacy application with a chunky internal component that we didn’t want to host in a public way. If I recall correctly, GCP had just launched their managed K8S, but I was leery of being locked into GCP, so self-managed VMs it was.
Fast forward a few years and things are a little different. DigitalOcean launched their managed Kubernetes offering last year, and GitLab started pushing their AutoDevOps product to deploy things to K8S by magic. DigitalOcean is a great fit for us as a small team, given how simple it is to manage, so we decided to jump in feet first and try out their K8S offering.
Unfortunately, the magic of AutoDevOps kind of fizzled out in the first couple of hours. It didn’t really do what I wanted, broke in an opaque way and was pretty underwhelming. It was like Harry doing magic, when we’d been hoping for Dumbledore. Nonetheless, I pulled the CI templates that they were using, modified them to suit and managed to get a continuously deployed project going to DO’s K8S service in a morning.
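The trimmed-down pipeline looked roughly like this. This is a sketch only: the stage names, the `myapp` release name, the chart path and the helper images are illustrative assumptions, not our actual config (the `$CI_*` variables are GitLab’s own predefined ones).

```yaml
# .gitlab-ci.yml (illustrative sketch, not our real file)
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    # Build the image once and push it, tagged by commit SHA
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

deploy_production:
  stage: deploy
  image: alpine/helm:3.2.0  # assumed helper image with helm installed
  script:
    # Point the chart at the image built above
    - helm upgrade --install myapp ./chart --set image.tag="$CI_COMMIT_SHA"
  environment:
    name: production  # gives you GitLab's environment UI, incl. rollback
```

The `environment` block is what lights up the rollback button mentioned below.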
This was what I’d been missing! Push a buggy change to production? Click the rollback in Gitlab’s environment UI. Notice a host is wigging out? Add a new node and kill the old one on DO’s dashboard. This is productivity! And the weird setup that stopped us from using Heroku before was a-okay with some fiddling in the Helm templates.
It was so easy to slip back into the old mindset that deployment is hard. That’s wrong! Deployment should be easy! I – like many developers – am fundamentally lazy. If you put a crappy deployment experience in my way, I will not rise to the occasion! I will deploy less frequently and come up with weird superstitions about when it should happen.
I am now far more aggressive in optimizing for developer workflow. I try to err on the side of visible features since I know that my natural inclination is to try to rewrite to the vogue language/framework/paradigm at any given moment. I think this experience has helped me to get a much better sense of when I’m optimizing for fun versus optimizing for a legitimate purpose.
I will no doubt have to learn this lesson again. Many times.
The thing I love about our current setup in particular is that a Docker image is now my artifact through most of our integration tests. By configuring our pipeline stages as a series of jobs in a docker-compose file, we can package up our final production image immediately after our unit tests run.
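The shape of that docker-compose file is roughly the following. Again, a sketch under assumptions: the service names, the `test`/`production` multi-stage Dockerfile targets and the registry URL are all invented for illustration.

```yaml
# docker-compose.ci.yml (illustrative sketch)
version: "3.7"
services:
  unit-tests:
    build:
      context: .
      target: test        # assumes a multi-stage Dockerfile with a test stage
    command: npm test

  app:
    build:
      context: .
      target: production  # the exact image we ship, built once tests pass
    image: registry.example.com/myapp:${TAG:-latest}

  integration-tests:
    build:
      context: .
      target: test
    command: npm run test:integration
    depends_on:
      - app               # integration tests run against the production image
```

The key property is that `integration-tests` exercises the same `production`-target image that eventually gets deployed, not a separate dev build.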
The implications of this are wonderful. Worried that your web app bundling is up to something a little suspect? No need to worry: the Percy visual regression tests ran against the production image. You’re all good!
We’re currently using the free version of GitLab and love it. I’ll do a more technical write-up of this in the future, but for now I wanted to sing the praises of this glorious future in which we’re working.