
Use KIND (Kubernetes in Docker) in CI/CD reliably

I’ve been working with OPA (Open Policy Agent) recently and using KIND to test things out. This works really nicely, but when I started using the same approach in CI I saw some errors.

Digging into it, you can see that the nodes of the KIND cluster aren’t “Ready” when the CLI finishes up, so you need a bit of extra bash-fu to make the process wait on the Ready status.

This monster line of bash does the trick:

JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 5; echo "--------> waiting for cluster node to be available"; done
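
For context, that JSONPath prints each node’s name followed by its condition types and statuses. On a single-node KIND cluster the output looks roughly like this (the node name follows the cluster name, so yours will differ):

opa-rego-teammapping-control-plane:MemoryPressure=False;DiskPressure=False;PIDPressure=False;Ready=True;

The grep then simply loops until the Ready condition reports True.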

In this example I’m also deploying a K8s operator, which needs to be up and running before I can run the integration tests; a similar bit of bash ensures that’s true too:

JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n opa -lapp=opa get pods -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 5;echo "--------> waiting for operator to be available"; kubectl get pods -n opa; done
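
If you’re on a reasonably recent kubectl, kubectl wait can express the same idea more compactly. A rough equivalent of the pod check (namespace and label taken from the command above) would be:

kubectl wait --namespace opa --for=condition=Ready pods -l app=opa --timeout=300s

One caveat: kubectl wait tends to fail immediately if nothing matches the selector yet, whereas the until loop keeps polling while the pods are still being created, so the loop is more forgiving in CI.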

Putting those all together, I can have a nice Makefile which:

  1. Deploys a KIND cluster
  2. Waits for it to be ready
  3. Deploys Open Policy Agent
  4. Waits for it to be running
  5. Runs my python integration tests

All by running make kind-integration 🙂

Full file:
KIND_CLUSTER_NAME ?= "opa-rego-teammapping"
WAIT_FOR_KIND_READY = '{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'
WAIT_FOR_OPA_READY = '{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'

.PHONY: all build test integration

all: test kind-integration

build: test

test:
	opa test ./*.rego -v --explain full

kind-start:
ifeq (1, $(shell kind get clusters | grep ${KIND_CLUSTER_NAME} | wc -l))
	@echo "Cluster already exists - deleting it to start from clean cluster"
	kind delete cluster --name ${KIND_CLUSTER_NAME}
endif
	@echo "Creating Cluster"
	kind create cluster --name ${KIND_CLUSTER_NAME} --image=kindest/node:v1.16.2
	until kubectl get nodes -o jsonpath="${WAIT_FOR_KIND_READY}" 2>&1 | grep -q "Ready=True"; do sleep 5; echo "--------> waiting for cluster node to be available"; done

kind-deploy:
	./integration/deploy/deploy.sh
	# Wait for OPA to be ready
	until kubectl -n opa -lapp=opa get pods -o jsonpath="${WAIT_FOR_OPA_READY}" 2>&1 | grep -q "Ready=True"; do sleep 5; echo "--------> waiting for operator to be available"; kubectl get pods -n opa; done

kind-integration: kind-start kind-deploy integration

integration:
	# Run integration tests
	python3 ./integration/test/int_test.py
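
The only extra requirement on the CI side is an agent with Docker plus the kind, kubectl, opa and python3 binaries on the PATH; the pipeline step then boils down to installing anything missing and calling the target. As a rough sketch (the kind version and download URL here are illustrative, grab the current one from the kind releases page):

# Illustrative CI step: fetch a kind binary, then run the full build/test flow
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.6.0/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
make kind-integration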

Kubernetes Integration Testing: MiniKube + Azure Pipelines = Happy

Update: With the release of KIND (Kubernetes in Docker) I’ve now moved to using it over MiniKube, as it’s quicker and simpler.

I recently did some work on a fairly simple controller to run inside Kubernetes. It connects to the K8s API and watches for changes to ingress objects in the cluster.
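
(Conceptually it’s doing the same kind of watch you’d get from the CLI with something like kubectl get ingress --all-namespaces --watch, just driven programmatically through the API so it can react to each change.)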

I had a nice cluster spun up for testing, which I could tweak and poke and then observe the results. This was nice, but I wanted to translate it into something that ran as part of my CI process to make it more repeatable. Having not played much with the new Azure Pipelines, I decided to try and get this working using one.

Here was the goal:

    • Build the source for the controller
    • Spin up a Kubernetes cluster
    • Deploy test resources (Ingress and Services) into the cluster
    • Connect the controller code to the cluster and run its tests

The obvious choice was to create the clusters in a cloud provider and use those for testing, but I wanted each PR/branch to be validated independently in a separate cluster, ideally in parallel, and things get complicated and expensive quickly if you go down that route.

Instead I worked with MiniKube, which has a ‘no VM mode’ that spins up a whole cluster using just Docker containers. The theory was that if the CI supports running Docker containers, it should support MiniKube clusters…
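
For reference, a minimal sketch of what that looks like on the agent, assuming minikube is already installed (the flag was --vm-driver=none at the time; newer minikube releases spell it --driver=none, and it needs root):

# Start a cluster directly on the host's Docker - no VM involved
sudo minikube start --vm-driver=none
kubectl get nodes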

TL;DR: Yes, this is possible with MiniKube and Azure Pipelines or Travis CI - skip to the end to see how.

