Using Makefiles And Envsubst As An Alternative To Helm And Ksonnet

UPDATE (06/13/2018)

After some much needed prodding from some readers that sent emails, I've created an example repo to more fully showcase the pattern!
You can find the example repo (`mrman/makeinfra-pattern`) on Gitlab. Check it out and make Merge Requests with any suggestions, discussion, and improvements you can think of!


UPDATE (06/26/2021)

Three years later, Helm 3 has long been out, Tiller is gone, and Helm looks a lot more usable, so take this post with a grain of salt! I still use Makefiles + kustomize though, which I've written about in follow-ups to this post.


tl;dr - Why don’t we use Makefiles in <project>-infra repos, git-crypt, and good naming conventions instead of Helm?

From the beginning of my time using Kubernetes, I’ve preached/emphasized a rigorous study of the documentation (at least initially), along with writing your own resource definitions (the YAML files that represent the Kubernetes concepts). After a while, however, I have grown to want some automation and more intelligence in generating/applying resource definitions to my cluster.

Most people seem to be using the combination of Helm to manage the application of resource definitions (providing some variable-substitution templating features along the way) and KSonnet for involved/advanced templating. Both of these teams are doing great work, providing something that they believe the community needs, and I want to stress that I don’t mean to denigrate their work when I express my opinion: I don’t think people should be encouraging the use of tools like Helm and KSonnet, especially to newcomers to Kubernetes and the ecosystem in general. The cognitive load of understanding Kubernetes is high enough without needing to understand Helm (+ Tiller) and KSonnet (even though KSonnet is very simple).

To restate my thesis: I think it’s too early to abstract users (who should be ops people) away from Kubernetes the platform. Kubernetes is meant to be an unseen, boring piece of your infrastructure, but it’s too early to assume that the vast majority of users are ready for that abstraction. Especially early in a product’s lifecycle, I think the relatively early adopters should be forced to understand how it works on a deeper level – this may look like it slows adoption, but these are the users who will be around to help newcomers when the project really starts to gain traction. Things will inevitably go wrong, and there are already a lot of places to look when Kubernetes is involved… let’s not add one more so soon.

I’ve resisted making a post like this up until now, mostly because I didn’t have any solutions I could offer, and even once I did, I hadn’t yet tried them on my own infrastructure to see how well they worked. That’s changed recently, so I’d like to introduce you to my idea for handling higher-level automation of ops – the MakeInfra/MakeOps pattern (if anyone can think of a better name, please let me know).

What is the MakeInfra/MakeOps pattern?

Catchy naming attempt aside, the core ideas I want to put forward are simple:

  • For every <project> repo, make a corresponding <project>-infra repository - this becomes composable by using git submodules for something like <team>-infra, or even a monorepo for infra (a sketch of this is shown after this list)
  • Name resource definitions with descriptive suffixes - e.g. jaeger.deployment.yaml, monitoring.ns.yaml
  • Use a Makefile with .PHONY targets in the folder with your resource definitions
  • Use git-crypt to store credentials inside the -infra repo - for testing, staging, and maybe even production
  • Suffix files that must be transformed with .pre - this makes it pretty clear to anyone which files are ready and which are not – you can also use whatever tool you want (KSonnet, envsubst, m4, python) for the replacement
  • Use the tools that make sense for your team - just like you would for any other Makefile-powered project (so technically, you can use helm, or jinja2, or raw perl, or ksonnet, or whatever else)
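
As a sketch of the composability point above, a hypothetical top-level Makefile for a <team>-infra repo (the component folder names here are made up) can simply delegate to the per-component Makefiles:

.PHONY: install

# Hypothetical component folders, each containing its own Makefile
COMPONENTS := monitoring/elastic-search monitoring/fluentd monitoring/grafana monitoring/jaeger

install:
    @for dir in $(COMPONENTS); do $(MAKE) -C $$dir install || exit 1; done

$(MAKE) -C enters each folder and runs its install target, so every component stays independently deployable while the top level can still bring everything up with one command.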

I think this idea is more powerful than tools like helm because it’s a superset of the ideas behind helm (as in, you can use helm very easily with this approach), and it decomposes well enough to offer a useful subset of helm’s features when you’re starting out.
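
For example, a Make target can wrap helm just as easily as it wraps kubectl – a minimal sketch, assuming a hypothetical local ./chart directory:

chart:
    helm upgrade --install jaeger ./chart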

When you want to ensure the project is up, if your local kubectl is already properly configured, you only need to clone the repo, enter the directory, and run make (or optionally make deploy or some other more specific target)!
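
Concretely (with a made-up repo URL), that looks like:

git clone git@gitlab.com:your-team/project-infra.git
cd project-infra
make    # or `make deploy`, or another more specific target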

Case Study: Deploying the Jaeger all-in-one

Jaeger (a CNCF project) is a program that provides OpenTracing-compliant tracing to applications in your cluster. I’ve recently written about getting Jaeger set up on my cluster, and here’s what the resulting -infra layout looks like.

Overall folder structure (as shown by tree):

monitoring/
├── elastic-search
│   ├── elastic-search.configmap.yaml
│   ├── elastic-search.pvc.yaml
│   ├── elastic-search.serviceaccount.yaml
│   ├── elastic-search.statefulset.yaml
│   ├── elastic-search.svc.yaml
│   └── Makefile
├── fluentd
│   ├── fluentd.configmap.yaml
│   ├── fluentd.daemonset.yaml
│   ├── fluentd.serviceaccount.yaml
│   └── Makefile
├── grafana
│   ├── grafana.deployment.yaml
│   ├── grafana.svc.yaml
│   └── Makefile
├── jaeger # <--- Here's jaeger
│   ├── jaeger.deployment.yaml
│   ├── jaeger.svc.yaml
│   └── Makefile

jaeger.deployment.yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
  namespace: monitoring
  labels:
    app: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
      - name: jaeger
        image: jaegertracing/all-in-one:1.3.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 500m
            memory: 500Mi
        env:
        - name: COLLECTOR_ZIPKIN_HTTP_PORT
          value: "9411"

jaeger.svc.yaml:

---
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: jaeger
  labels:
    app: jaeger
spec:
  selector:
    app: jaeger
  ports:
    - name: jaeger-agent-zipkin
      protocol: TCP
      port: 9411
    - name: jaeger-query
      protocol: TCP
      port: 16686

Makefile:

.PHONY: install svc deployment uninstall check-tool-kubectl

install: svc deployment

# Resolved path to kubectl; empty if it is not on the PATH
KUBECTL := $(shell command -v kubectl)

check-tool-kubectl:
ifndef KUBECTL
    $(error "`kubectl` is not available, please install kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/)")
endif

svc: check-tool-kubectl
    $(KUBECTL) apply -f jaeger.svc.yaml

deployment: check-tool-kubectl
    $(KUBECTL) apply -f jaeger.deployment.yaml

uninstall: check-tool-kubectl
    $(KUBECTL) delete -f jaeger.svc.yaml
    $(KUBECTL) delete -f jaeger.deployment.yaml
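
With that in place, bringing Jaeger up is one command from the jaeger folder – the output should look roughly like this (the "created" lines come from kubectl):

$ cd monitoring/jaeger
$ make install
kubectl apply -f jaeger.svc.yaml
service/jaeger created
kubectl apply -f jaeger.deployment.yaml
deployment.apps/jaeger created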

I’m by no means an expert in using Make (I haven’t even read the whole Make manual), but I find this pretty easy to read through and understand. Make can get pretty deep, so there is definitely a danger of over-complicating things.

I haven’t included an example of a .pre-suffixed file or a git-crypt encrypted secrets file, but it should be fairly obvious how those would fit in – I’m a fan of using envsubst, so git-crypting a shell variable declaration file would be enough (along with running envsubst, of course). A minimal sketch follows.
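
Everything in this sketch is hypothetical (the ES_PASSWORD variable, the file names, and the idea that this Jaeger setup needs an ElasticSearch password are purely for illustration). Suppose a git-crypted secrets.env holds lines like ES_PASSWORD=hunter2, and jaeger.secret.yaml.pre looks like:

---
apiVersion: v1
kind: Secret
metadata:
  name: jaeger-es-credentials
  namespace: monitoring
type: Opaque
stringData:
  password: ${ES_PASSWORD}

Then two more Makefile targets render the template and apply it:

# Render the .pre template by exporting the secrets and running envsubst
jaeger.secret.yaml: jaeger.secret.yaml.pre secrets.env
    set -a && . ./secrets.env && set +a && envsubst < $< > $@

secret: check-tool-kubectl jaeger.secret.yaml
    $(KUBECTL) apply -f jaeger.secret.yaml

set -a exports every variable sourced from secrets.env so envsubst can see them, and the file dependencies mean the secret only gets re-rendered when the template or the secrets change (you’d probably also want to .gitignore the rendered jaeger.secret.yaml).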

Deficiencies with this approach (as compared to a tool like Helm)

There is obviously a distinct lack of engineering with this approach, and while I think it’s “just enough”, there are some pretty obvious ways in which this is inferior to a Helm-based approach. This approach is clearly less immediately robust, but it is much simpler and more flexible than Helm, and with some careful, simple scripting, can be very powerful.

  1. Danger of hanging yourself (and your team) with the freedom that Make and the wide-open solution space offer.
  2. It’s just a pattern, so there may be holy wars over how to extend it further.
  3. No versioning, which is an explicit feature of Helm (upgrade an “app” to a new version along with all its dependencies).
  4. Just about every feature that Helm offers beyond simple variable replacement and applying resource definitions isn’t provided here.

Of course, these things can be worked around by creating well-known/usable higher-level patterns on top of the general one, but that’s more work than is necessary right now. Last I checked, Helm’s feature list (relevant to #4 in the list above) wasn’t so extensive, so maybe this approach makes more sense now than it ought to (or will in the future).

Wrapping up

Hopefully you enjoyed the breakdown of this idea – I’d love to know if you have some more ideas for catchy names, or if you’ve found some gaping holes in the plan that make it actually a terrible idea.