Azure, Coding, Uncategorized

Friends don’t let friends commit Terraform without fmt, linting and validation

So it starts out easy: you write a bit of Terraform and all is going well. Then, as more and more people start committing and the code churns, things get messy. Breaking commits block releases, formatting isn’t consistent and errors get repeated.

Seems a bit odd, right? In the middle of your DevOps pipeline, which dutifully checks that code passes tests and validation, you give Terraform a free pass.


The good news is Terraform has tools to help you out here and make life better!

Here is my rough script for running during build to detect and fail early on a host of Terraform errors. It also pins Terraform to a set release (hopefully the same one you use when releasing to prod) and does a terraform init each time to make sure you have providers pinned (if not, the script fails when a provider ships breaking changes, giving you an early heads-up).

It’s rough and ready, so make sure you’re happy with what it does before you give it a run. For an added bonus, the docker command below the script runs it inside an Azure DevOps container to emulate locally what should happen when you push.

#! /bin/bash
set -e
echo -e "\n\n>>> Installing Terraform 0.12"
# Install terraform tooling for linting terraform
wget -q -O /tmp/
sudo unzip -q -o -d /usr/local/bin/ /tmp/
echo ""
echo -e "\n\n>>> Install tflint (3rd party)"
wget -q -O /tmp/
sudo unzip -q -o -d /usr/local/bin/ /tmp/
echo -e "\n\n>>> Terraform version"
terraform -version
echo -e "\n\n>>> Terraform Format (if this fails use 'terraform fmt' command to resolve)"
terraform fmt -recursive -diff -check
echo -e "\n\n>>> tflint"
tflint
echo -e "\n\n>>> Terraform init"
terraform init
echo -e "\n\n>>> Terraform validate"
terraform validate


docker run --rm -v ${PWD}:/source \


Optionally you can add args like -var java_functions_zip_file=something to the terraform validate call.

Hope this helps as a quick rough guide!

Azure, How to, kubernetes

Kubernetes Integration Testing: MiniKube + Azure Pipelines = Happy

Update: With the release of KIND (Kubernetes in Docker) I’ve now moved to using this over minikube as it’s quicker and simpler.

I recently did some work on a fairly simple controller to run inside Kubernetes. It connects to the K8s API and watches for changes to ingress objects in the cluster.

I had a nice cluster spun up for testing which I could tweak and poke then observe the results. This was nice BUT I wanted to translate it into something that ran as part of my CI process to make it more repeatable. Having not played much with the new Azure Pipelines I decided to try and get this working using one.

Here was the goal:

    • Build the source for the controller
    • Spin up a Kubernetes cluster
    • Deploy test resources (Ingress and Services) into the cluster
    • Connect the controller code to the cluster and run its tests

The obvious choice was to create the clusters inside a cloud provider and use them for testing, but I wanted each PR/branch to be validated independently in a separate cluster, ideally in parallel, so things get complicated and expensive if we go down that route.

Instead I worked with MiniKube, which has a ‘no vm mode’ (minikube start --vm-driver=none) that spins up a whole cluster using just docker containers. The theory was: if the CI supports running docker containers, it should support MiniKube clusters…

TLDR: Yes this is possible with MiniKube and Azure Pipelines or Travis CI – Skip to the end to see how.


Apps, Azure, kubernetes

Magic, MutatingAdmissionsControllers and Kubernetes: Mutating pods created in your cluster

I recently wanted to use a Mutating Admissions Controller in Kubernetes to alter pods submitted to the cluster – here is a quick summary of how to do it.

In this case we wanted to change the image pull location, just as a quick example (I’m not sure this is a great idea in a real system as it introduces a single point of failure for pod creation but the sample code should be useful to others).

So how do they work? Well, it’s super simple: you register a webhook in K8s which is called when a certain action occurs, and you create a receiver which accepts that webhook and responds with a JSONPatch containing any changes you want to make.

Let’s try it out. First up you’ll need ngrok, which creates a public endpoint for a port on your machine, with an https cert and everything. We’ll use this for testing.

Let’s start our webhook receiver locally:

  1. ngrok http 3000
  2. git clone and cd into the dir
  3. npm install && npm watch-server

To wire this up, you register a webhook in Kubernetes which is called when certain things happen – in this case we register one to be called when a pod is created:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: local-repository-controller-webhook
webhooks:
- name: local-repository-controller-webhook
  clientConfig:
    # url: <your https ngrok endpoint>
    # caBundle: <ngrok public cabundle>
  failurePolicy: Fail
  namespaceSelector: {}
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods


When our simple Koa.js app, written in TypeScript, receives the request it does the following:

  1. Clones the incoming pod spec into a new object
  2. Makes changes to the clone, updating the image location
  3. Creates a JSONPatch by comparing the original and the clone
  4. Base64-encodes the JSONPatch data
  5. Returns the patch as part of an `admissionResponse` object
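The steps above can be sketched roughly like this. To be clear, this is a minimal sketch and not the repo’s actual code: the pod spec shape is trimmed right down, the function names and registry value are made up for illustration, and the patch is hand-built rather than generated by diffing the two objects.

```typescript
// Hypothetical, trimmed-down shape of the incoming pod object.
interface PodSpecLike {
  spec: { containers: { name: string; image: string }[] };
}

// A single JSONPatch (RFC 6902) operation.
interface PatchOp {
  op: "replace";
  path: string;
  value: string;
}

// Step 1-3: walk the containers and build a patch that points each
// image at a local repository instead of its original registry.
function buildImagePatch(pod: PodSpecLike, localRepo: string): PatchOp[] {
  const patch: PatchOp[] = [];
  pod.spec.containers.forEach((c, i) => {
    const imageName = c.image.split("/").pop();
    patch.push({
      op: "replace",
      path: `/spec/containers/${i}/image`,
      value: `${localRepo}/${imageName}`,
    });
  });
  return patch;
}

// Step 4-5: base64-encode the patch and wrap it in an admissionResponse.
function buildAdmissionResponse(uid: string, patch: PatchOp[]) {
  return {
    response: {
      uid,
      allowed: true,
      patchType: "JSONPatch",
      patch: Buffer.from(JSON.stringify(patch)).toString("base64"),
    },
  };
}

// Example: rewrite nginx to pull from a (made-up) local registry.
const pod = { spec: { containers: [{ name: "web", image: "docker.io/library/nginx" }] } };
console.log(buildAdmissionResponse("1234", buildImagePatch(pod, "myregistry.local:5000")));
```

The real receiver returns this object as the body of the webhook response; Kubernetes then decodes and applies the patch before the pod is persisted.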

The code is hopefully nice and simple to follow, so take a look at it here. If you’d like a more complex example you can take a look at the golang code here in istio, which uses a similar method to inject the istio sidecars (this is what I read in order to write the TypeScript example).

That’s it, nice and simple.

Note: the ngrok approach won’t work in an Azure AKS cluster due to networking restrictions; you’ll need an ACS Engine cluster or other. Alternatively you can test inside the cluster with the receiver set up as a service, but beware of circular references (the pod can’t be created because CREATE calls the webhook, which is received by the pod, which can’t be created).