
Kubernetes Integration Testing: MiniKube + Azure Pipelines = Happy

Update: With the release of KIND (Kubernetes in Docker) I've now moved to using it instead of MiniKube, as it's quicker and simpler.

I recently did some work on a fairly simple controller to run inside Kubernetes. It connects to the K8s API and watches for changes to ingress objects in the cluster.

I had a nice cluster spun up for testing which I could tweak and poke then observe the results. This was nice BUT I wanted to translate it into something that ran as part of my CI process to make it more repeatable. Having not played much with the new Azure Pipelines I decided to try and get this working using one.

Here was the goal:

    • Build the source for the controller
    • Spin up a Kubernetes cluster
    • Deploy test resources (Ingress and Services) into the cluster
    • Connect the controller code to the cluster and run its tests

The obvious choice was to create clusters in a cloud provider and use those for testing, but I wanted each PR/branch to be validated independently in its own cluster, ideally in parallel, and things get complicated and expensive quickly if we go down that route.

Instead I worked with MiniKube, which has a 'no vm mode' that spins up a whole cluster using just Docker containers. The theory was that if the CI supports running Docker containers, it should support MiniKube clusters…
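
At its core, 'no vm mode' is a single flag on 'minikube start'; the full CI script later in this post wraps it with version pinning, permissions fixes and readiness checks:

# Run the cluster components directly on the host as Docker containers,
# no VM required (needs root, hence sudo).
sudo minikube start --vm-driver=none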

TL;DR: Yes, this is possible with MiniKube and Azure Pipelines or Travis CI. Skip to the end to see how.

Azure Pipelines offers 'Ubuntu 16.04' as a base for builds, so I set out building a script that would work against that. Luckily there is some prior work by the Travis team which got me started.

I reworked their .travis.yml into a script file which could, in theory, be used against any Ubuntu 16.04 image. There are a few notable tweaks I had to make to the original Travis example:

  1. For some reason the file permissions for the '.kube' and '.minikube' folders misbehaved in Azure Pipelines, so these are fixed up by the two 'chmod' lines after the cluster starts.
  2. I pinned the version numbers of both 'kubectl' and 'minikube' in their download lines to prevent the script breaking as changes are made to either tool. (Previously these took 'latest'.)
  3. As MiniKube clusters run locally they don't understand how to deal with a 'Service' of 'Type=LoadBalancer'. The last line of the script includes a workaround for this by elsonrodriguez. This means I can test the same YAML I'll be using in my real clusters, rather than having separate YAML for MiniKube and Production.


#!/bin/bash
set -e
# Adapted from: https://github.com/LiliC/travis-minikube/blob/minikube-26-kube-1.10/.travis.yml
export CHANGE_MINIKUBE_NONE_USER=true
echo "--> Downloading minikube"
# Make root mounted as rshared to fix kube-dns issues.
sudo mount --make-rshared /
# Download kubectl, which is a requirement for using minikube.
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# Download minikube.
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.30.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
echo "--> Starting minikube"
sudo minikube start --vm-driver=none --bootstrapper=kubeadm --kubernetes-version=v1.12.0
# Fix permissions issue in Azure Pipelines
sudo chmod --recursive 777 $HOME/.minikube
sudo chmod --recursive 777 $HOME/.kube
# Fix the kubectl context, as it's often stale.
minikube update-context
echo "--> Waiting for cluster to be usable"
# Wait for Kubernetes to be up and ready: poll the node and pod conditions
# until they report Ready=True.
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1; done
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n kube-system get pods -lcomponent=kube-addon-manager -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1; echo "waiting for kube-addon-manager to be available"; kubectl get pods --all-namespaces; done
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl -n kube-system get pods -lk8s-app=kube-dns -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1; echo "waiting for kube-dns to be available"; kubectl get pods --all-namespaces; done
echo "--> Get cluster details to check it's running"
kubectl cluster-info
echo "--> Setup support for external IPs in LoadBalancer services"
# See workaround details here: https://github.com/elsonrodriguez/minikube-lb-patch
kubectl run minikube-lb-patch --replicas=1 --image=elsonrodriguez/minikube-lb-patch:0.1 --namespace=kube-system
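
As an aside, on kubectl v1.11 and later the three JSONPATH polling loops above could likely be replaced with the built-in 'kubectl wait' command. A minimal sketch, untested against this exact setup:

# Block until all nodes report Ready, then until the kube-dns pods do.
kubectl wait --for=condition=Ready nodes --all --timeout=120s
kubectl -n kube-system wait --for=condition=Ready pods -lk8s-app=kube-dns --timeout=120s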


#!/bin/bash
set -e
cd "$(dirname "$0")"
cd ./testyaml
# Apply every YAML file in this folder: each one creates a test namespace
# with a different configuration.
ls . | xargs -n 1 kubectl apply -f

The second script loops through all the YAML files in the 'testyaml' folder and deploys them to the newly created cluster with 'kubectl'. To test the controller against different setups I create several namespaces, one for each test case, and deploy test resources into each namespace. My tests then pick different namespaces and assert the behavior is correct. You can see this in the following code, where the 'name' parameter is the namespace each of the tests will run against.

https://gist.github.com/lawrencegripper/deff8dd8844fc397fe4b5eab5f91b46a 
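For illustration, one of those testyaml files might look something like the sketch below (names are hypothetical; the real files live in the repository's 'testyaml' folder). Each file creates a dedicated namespace for a test case plus the resources under test, shown here applied inline. Note the 'Type=LoadBalancer' Service works locally thanks to the minikube-lb-patch workaround from earlier:

# Hypothetical test case: its own namespace plus a LoadBalancer Service.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: testcase-basic
---
apiVersion: v1
kind: Service
metadata:
  name: example-service
  namespace: testcase-basic
spec:
  type: LoadBalancer
  ports:
  - port: 80
EOF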

Last but not least I need to run these scripts inside Azure Pipelines. I'm a big fan of 'Configuration as Code' so I used the YAML definition files rather than the UI editor. Quite a bit of this file is Golang-specific build configuration; if you're not using Go then all you'll need is the pool.vmImage definition at the top and to invoke the script we created earlier. I'm doing this in the final step with 'bash -f ./scripts/startminikube_ci.sh', then starting my integration tests with 'make integration' (you can replace the make call with your own tests).


pool:
  vmImage: 'Ubuntu 16.04'

# Setup for GoLang, can skip for other languages
variables:
  GOBIN: '$(GOPATH)/bin' # Go binaries path
  GOROOT: '/usr/local/go1.11' # Go installation path
  GOPATH: '$(system.defaultWorkingDirectory)/gopath' # Go workspace path
  modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)' # Path to the module's code

steps:
- script: | # Setup for GoLang, can skip for other languages
    mkdir -p '$(GOBIN)'
    mkdir -p '$(GOPATH)/pkg'
    mkdir -p '$(modulePath)'
    shopt -s extglob
    mv !(gopath) '$(modulePath)'
    echo '##vso[task.prependpath]$(GOBIN)'
    echo '##vso[task.prependpath]$(GOROOT)/bin'
  displayName: 'Set up the Go workspace'

- script: bash -f ./scripts/installtools.sh && make # Setup for GoLang, can skip for other languages
  workingDirectory: '$(modulePath)'
  displayName: 'Build Go and Docker image'

- script: bash -f ./scripts/startminikube_ci.sh && make integration
  workingDirectory: '$(modulePath)'
  displayName: 'Run integration tests with Minikube'

To see how this all works together have a look at the repository hosting my controller code here.

If you want to build a Docker image and then deploy it inside the MiniKube cluster, there appears to be a way to do this too. I haven't tried it, but it would remove the need for the build to push the image to a registry before testing. If the tests pass, the build can then push and tag the image for others to use.
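
A rough sketch of how that might work (untested; the image name is a placeholder):

# With --vm-driver=none the host's Docker daemon IS the cluster's daemon, so a
# plain local build is already visible to Kubernetes. With a VM-based driver,
# point the docker CLI at the VM's daemon first:
eval $(minikube docker-env)        # VM-based drivers only
docker build -t mycontroller:ci .  # placeholder image tag
# Then set imagePullPolicy: Never in the pod spec so Kubernetes uses the
# local image rather than trying to pull it from a registry.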

I’m pretty happy with the results and enjoyed working with Azure Pipelines for the first time.

As a point of interest I also got the same setup working with Travis CI (which has been my go-to CI for OSS projects in the past) to compare the two. Apart from a slightly different time to start MiniKube (Azure Pipelines: 3.38 mins vs Travis: 5.10 mins) there was very little difference between them. The only other difference I noticed is that the .travis.yml file may be a little more readable, but this is pretty subjective. One thing that I'm not making use of yet in Azure Pipelines, but does set them apart, is the 'Release Management' you can tag on after a CI build.
