How to preview deployments with Kubernetes
Sometimes replicating your full stack environment locally is pretty hard. Orchestrating lots of microservices in a development environment and sharing demos can be tricky.
Making incremental changes and shipping fast is ideal, and it's our default workflow for the vast majority of development. But when we want to preview a change before it ships, that flow isn't the easiest fit.
When a logical change affects multiple systems, you may have to consider rolling it out in a way that is non-disruptive. Strategies like feature flags and extra configuration are a burden that can slow experimentation, especially if the change is meant to be throwaway code.
What if you could generate a custom staging environment for any pull request? At Jam we've built this feature into our CI pipeline as an optional setting! It's a dog-fooding superpower that lets us build, test, and QA new products, whether or not they involve lots of separate systems. This ability has been really useful for us, allowing us to ship changes fast and experiment with code on a larger scale.
In this blog post, I will share our strategy for making this possible.
Development flow
What is the typical request lifecycle?
- Engineer writes code
- Engineer pushes the code as a change request to their favorite version control system
- Peers review and code is merged
- Code makes its way to staging
- Staging makes its way to Production
Basically: write the code, merge it, and release it. Pretty simple and commonly practiced in most organizations.
Stakeholders like product managers, designers, and QA have to wait for a staging release. Depending on that cadence, the wait could be frictionless or it could take hours or days. Ideally, change sets are small, focused, and low impact. Merging to staging should be easy, right?
In most cases, this is true but it doesn’t always pan out. Sometimes you want to experiment and try a change that affects many disparate systems. Maybe you are working on a brand new feature but it isn’t quite ready for launch. Maybe you don’t want to deal with feature flagging across the codebase.
We use preview environments as another tool for collaboration. You can share screenshots and videos, or even open a local tunnel with tools like ngrok or Cloudflare Tunnel, but having someone simply visit your preview environment is super convenient. This is especially important for us because we are a fully distributed team spanning multiple time zones. We can't always leave our computers on with open tunnels for our teammates.
Jam Previews
We use GitHub and GitHub Actions for our CI. It's super simple: just make a pull request and add a label to it. If you tag your pull request with the ci-preview-deploy label, we will spin up the universe for you.
Our GitHub Actions workflow will check for this label and execute extra steps we'll cover later on, but the end result is a comment including links to the dashboard, extension artifacts, and even a Storybook deployment. Remember that all of these components are isolated in their own unique environment.
As soon as the pull request is closed, we destroy everything. It's all deployed in Kubernetes under a temporary namespace.
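To make this concrete, here is a minimal sketch of what a label-gated workflow could look like. It's an illustration under assumptions, not our actual pipeline: the script path is invented, and the real workflow has more steps (building artifacts, posting the PR comment, and so on).

```yaml
name: preview
on:
  pull_request:
    types: [opened, synchronize, labeled, closed]

jobs:
  deploy-preview:
    # Only deploy when the PR carries the ci-preview-deploy label
    if: github.event.action != 'closed' && contains(github.event.pull_request.labels.*.name, 'ci-preview-deploy')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical script wrapping the helm upgrade described below
      - run: ./scripts/deploy-preview.sh "pr-${{ github.event.pull_request.number }}"

  teardown-preview:
    # Destroy everything as soon as the pull request is closed
    if: github.event.action == 'closed'
    runs-on: ubuntu-latest
    steps:
      # Cluster credential setup omitted for brevity
      - run: kubectl delete namespace "pr-${{ github.event.pull_request.number }}" --ignore-not-found
```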
How it works
The relevant deployment steps may vary based on your setup, but there are essentially three key ingredients that make this work.
1/ Identify your preview branch. We need a way to identify our preview environment, so we generate a slug which is used as a prefix for all the service hostnames.
Since we're using GitHub Actions, we can easily reference the pull request number. The slug format that we use is pr-{{ pull request number }}, like pr-5513.
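In a GitHub Actions workflow, deriving that slug is a one-liner. The variable name here is our own invention for the sketches below:

```yaml
env:
  # Becomes pr-5513 for pull request #5513
  PREVIEW_SLUG: pr-${{ github.event.pull_request.number }}
```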
2/ Override hostnames: Since we're using Kubernetes and Helm to manage our releases, this is all easy to configure; we just override all the hostnames using that slug identifier.
Note that we run a separate helm upgrade with prev-values.yaml, which has lighter resource requirements: we don't need as many replicas, and we request less CPU and memory for these temporary services.
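The exact contents depend on your chart's values schema, but a sketch of such a file might look like:

```yaml
# prev-values.yaml -- trimmed-down settings for throwaway previews
# (key names are illustrative; use whatever your chart defines)
replicaCount: 1
autoscaling:
  enabled: false
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```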
We also target a separate Kubernetes cluster and namespace for isolation.
The only parts that vary between pull requests are the hostnames and some pub/sub topics, so we override those values using the helm upgrade --set flag.
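Put together, the deploy step boils down to something like the following workflow step. The chart path, kube context name, and value keys are placeholders:

```yaml
- name: Deploy preview environment
  run: |
    helm upgrade --install "$PREVIEW_SLUG" ./charts/jam \
      --kube-context preview-cluster \
      --namespace "$PREVIEW_SLUG" --create-namespace \
      -f prev-values.yaml \
      --set hostname="$PREVIEW_SLUG.jam.dev" \
      --set pubsub.topicPrefix="$PREVIEW_SLUG"
```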
3/ Networking with a temporary namespace:
We're big fans of Cloudflare and use their managed DNS. We have a wildcard A record that points to our Kubernetes ingress IP address.
At a high level, this wildcard points to the Kubernetes ingress controller, the ingress specifies which virtual hostnames map to services, and services are pointed to the deployments.
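Rendered for one preview, the resulting Ingress looks roughly like this. The hostname scheme, service name, and port are illustrative; in practice each service gets its own slug-prefixed host:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  namespace: pr-5513
spec:
  rules:
    # The wildcard *.jam.dev A record sends preview traffic to the
    # ingress controller; this host rule routes one slug to its service.
    - host: pr-5513.jam.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```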
For our frontend single-page application, we use Cloudflare Pages to handle deployments. These are hosted on a separate Cloudflare domain, *.pages.dev. This is slightly inconvenient because we prefer having just one domain, jam.dev, to work off of.
To make things a little bit easier for us, we set up a virtual hostname that proxies our frontend to the Cloudflare Pages domain. We use a Cloudflare Worker, which handles rewriting the Host header.
Incoming requests look like pr-5432.jam.dev, but we proxy them to a dynamically generated hostname, pr-5432.jam-frontend.pages.dev.
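A minimal sketch of that idea as a Worker, assuming the module syntax and the pages.dev project name from the example above (error handling omitted):

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // "pr-5432" from an incoming hostname like pr-5432.jam.dev
    const slug = url.hostname.split(".")[0];
    // Point the request at the matching Cloudflare Pages deployment;
    // fetching the rewritten URL rewrites the Host header along with it.
    url.hostname = `${slug}.jam-frontend.pages.dev`;
    return fetch(new Request(url.toString(), request));
  },
};
```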
If you don't use Cloudflare Workers, you could use something else that sits in front of your frontend server, like Nginx as a reverse proxy, and voila!
Recap
To summarize, the above setup gives us a temporary, unique set of URLs on our own domain name. The URLs are prefixed with the pull request number and are super easy to share!
We use Kubernetes, which really does the heavy lifting. Its declarative nature means we can spawn and dispose of environments easily.
Whatever toolchain you use should be able to accommodate this pattern, though. It's basically just running your deploy script with a couple of parameters tweaked.