
How to build Kubernetes from source and test in Kind with VSCode & devcontainers

Last week I found myself looking at a bit of code in K8s which I thought I could make better, so I set about working out how to clone, change and test it.

Luckily K8s has some good docs; trust those over me, as they’re a great guide. This blog is more of a brain dump of how I got on trying it with Devcontainers and VSCode. It’s my first go at this, so I’ve likely got lots of things wrong.

Roughly I knew what I needed, as I’d heard about the following (a rough sketch of how they fit together comes just after the list):

  1. Bazel for the Kubernetes build
  2. Kind to run a cluster locally
  3. Kubetest for running end2end tests
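
For a feel of how those pieces fit together, this is the loop I was aiming for. It’s pieced together from the kind docs rather than the official contributor guide, so treat it as a sketch; the cluster name is just an example, and it assumes the Kubernetes source is checked out where kind expects it (by default $GOPATH/src/k8s.io/kubernetes).

# Build a kind node image from the local Kubernetes source tree
kind build node-image

# Start a cluster from the freshly built image (tagged kindest/node:latest by default)
kind create cluster --name k8s-dev --image kindest/node:latest

# Sanity check that the cluster is running my build
kubectl get nodes -o wide
kubectl version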

As the K8s build and testing cycle can use up quite a bit of machine power I didn’t want to be doing all this on my laptop, and ideally I wanted to capture all the setup in a nice repeatable way.

Enter Devcontainers for VSCode. I’ve got them set up on my laptop to actually run on a meaty server for me, and I can use them to capture all the setup requirements for building K8s.
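
I won’t cover my exact wiring here, but one way to get the same effect is to point your local Docker CLI at the remote engine with a Docker context; the hostname and user below are placeholders, and as far as I understand recent versions of the extension will follow whichever context is active.

# Create and switch to a context that talks to the remote Docker engine over SSH
# (build-box.example.com / me are placeholders for your own server)
docker context create build-box --docker "host=ssh://me@build-box.example.com"
docker context use build-box

# From here the devcontainer is built and run on the remote engine,
# while VSCode itself stays on the laptop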

Continue reading

Use KIND (Kubernetes in Docker) in CI/CD reliably

I’ve been working with OPA recently and using KIND to test things out. This works really nicely, but when I started using the same approach in CI I saw some errors.

Digging into things, you can see that the nodes of the KIND cluster aren’t “Ready” when the CLI finishes up, so you need a bit of extra bash foo to make the process wait on the Ready status.

This monster line of bash does the trick:

JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 5; echo "--------> waiting for cluster node to be available"; done
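
To make that a little less magic: the jsonpath flattens each node’s conditions onto a single line, so on a one-node kind cluster the command on its own produces output roughly like the comment below (the node name will differ), which is what the grep -q "Ready=True" is matching on.

# Example output (single-node kind cluster):
#   kind-control-plane:MemoryPressure=False;DiskPressure=False;PIDPressure=False;Ready=True;
kubectl get nodes -o jsonpath='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'

Worth noting the grep succeeds as soon as any node reports Ready=True, which is fine for a single-node cluster but would need tightening for multi-node setups.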

In this example I’m also deploying a K8s operator which needs to be up and running before I can run the integration tests; a similar bit of bash ensures that’s true too:

JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n opa -lapp=opa get pods -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 5;echo "--------> waiting for operator to be available"; kubectl get pods -n opa; done

Putting those all together, I can have a nice Makefile which:

  1. Deploys a KIND cluster
  2. Waits for it to be ready
  3. Deploys Open Policy Agent
  4. Waits for it to be running
  5. Runs my python integration tests

All by running make kind-integration 🙂

Full file:
KIND_CLUSTER_NAME ?= "opa-rego-teammapping"
WAIT_FOR_KIND_READY = '{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'
WAIT_FOR_OPA_READY = '{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'

.PHONY: all build test integration
all: test kind-integration
build: test

test:
	opa test ./*.rego -v --explain full

kind-start:
ifeq (1, $(shell kind get clusters | grep ${KIND_CLUSTER_NAME} | wc -l))
	@echo "Cluster already exists - deleting it to start from clean cluster"
	kind delete cluster --name ${KIND_CLUSTER_NAME}
endif
	@echo "Creating Cluster"
	kind create cluster --name ${KIND_CLUSTER_NAME} --image=kindest/node:v1.16.2
	until kubectl get nodes -o jsonpath="${WAIT_FOR_KIND_READY}" 2>&1 | grep -q "Ready=True"; do sleep 5; echo "--------> waiting for cluster node to be available"; done

kind-deploy:
	./integration/deploy/deploy.sh
	# Wait for OPA to be ready
	until kubectl -n opa -lapp=opa get pods -o jsonpath="${WAIT_FOR_OPA_READY}" 2>&1 | grep -q "Ready=True"; do sleep 5; echo "--------> waiting for operator to be available"; kubectl get pods -n opa; done

kind-integration: kind-start kind-deploy integration

integration:
	# Run integration tests
	python3 ./integration/test/int_test.py
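
For completeness, this is roughly how the whole thing can be kicked off from a CI job; the kind version and install method here are illustrative rather than what any particular pipeline uses.

#!/usr/bin/env bash
# Example CI step (illustrative): fetch a kind release binary, then run the
# same make target shown in the Makefile above
set -euo pipefail

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

make kind-integration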

Mutating Admissions Controllers with Open Policy Agent and Rego

First up, a quick refresher: what is a mutating admission controller?

Well, it’s a nice feature in Kubernetes which lets you intercept objects when they’re created and make changes to them before they’re deployed into the cluster.

Cool right? All those fiddly bits of YAML, or hard-to-enforce company policies around network access and which image stores you can and can’t use: they can all be enforced and FIXED automagically! (Like all magic, caution is advised; choose wisely. Cue Monty Python gif.)

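To give a concrete flavour of what “fixed automagically” means: under the hood a mutating webhook responds with a JSONPatch (RFC 6902) describing the change it wants made. A hypothetical patch that bolts a nodeSelector onto an incoming pod would look something like this (the selector values are made up):

[
  { "op": "add",
    "path": "/spec/nodeSelector",
    "value": { "agentpool": "batchpool" } }
]
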

So what’s the catch? Well, without Open Policy Agent (OPA) you had to build out a web API to do the magic of changing the object, then build/push an image and maintain the solution. While you can write them quite easily now with tools like Kubebuilder (or, if you really love Node, I built one using that too), I wanted to see if OPA made things easier.

So what if you want something more dynamic, flexible and a little easier to look after?

This is where Open Policy Agent comes in: it has a DSL specially designed for building out and enforcing complex policies.

Today I’ve been having a play with it to work out if I could build a controller which would set a certain nodeSelector on pods based on which namespace they are deployed in.
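
To make that goal concrete, the check I had in mind looks something like this; the namespace, pod and selector values are made up for illustration.

# Hypothetical check: with the policy in place, a pod created in a namespace the
# policy cares about should come back with the nodeSelector already injected
kubectl create namespace team-a
kubectl -n team-a run nginx --image=nginx --restart=Never
kubectl -n team-a get pod nginx -o jsonpath='{.spec.nodeSelector}'
# expect something like map[agentpool:teamapool] rather than empty output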

I’ll go over this very broadly; I highly recommend looking at the docs in detail before diving in, as I lost quite a bit of time to not reading things properly before starting.

I won’t lie, getting used to the DSL (Rego) was painful for me, mainly because I came at it thinking it was going to be really like Golang. It does look quite like it, but that’s where the similarity ends; it’s more functional/pattern-matching and better suited to tersely making decisions based on data.

To counter the learning curve of Rego I have to say that, as I’ve raised issues and contributions, the maintainers have been super responsive and helpful (even when I’ve made some silly mistakes), and the docs are great, with runnable samples to get you started.

Let’s talk more about what I built out.

Continue reading
