Apps, kubernetes

Mutating Admission Controllers with Open Policy Agent and Rego

First up, quick refresher – what is a mutating admission controller?

Well, it’s a nice feature in Kubernetes which lets you intercept objects when they’re created and make changes to them before they’re deployed into the cluster.

Cool, right? All those fiddly bits of YAML, or hard-to-enforce company policies around network access and which image stores you can and can’t use, can all be enforced and FIXED automagically! (Like all magic, caution is advised, choose wisely – cue Monty Python gif)


So what’s the catch? Well, without Open Policy Agent (OPA) you had to build out a web API to do the magic of changing the object, then build/push an image and maintain the solution going forward. You can write these quite easily now with tools like KubeBuilder (or, if you really love Node, I built one using that too), but I wanted to see if OPA made things easier.

So what if you want something more dynamic, flexible and a little easier to look after?

This is where Open Policy Agent comes in: it has a DSL specially designed for building out and enforcing complex policies.

Today I’ve been having a play with it to work out if I could build a controller which would set a certain nodeSelector on pods based on which namespace they are deployed in.
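
To give a flavour, here’s a rough sketch of the kind of Rego rule involved – not the exact policy from the post. The namespace-to-pool mapping and the agentpool label are made up for illustration, and the wiring that turns the patch into a full AdmissionReview response (patchType, base64-encoded patch and so on) is left out:

    package kubernetes.admission

    # Hypothetical mapping from namespace to node pool, purely for illustration.
    node_pool_for_namespace = {
        "team-a": "pool-a",
        "team-b": "pool-b"
    }

    # Emit a JSONPatch operation that adds a nodeSelector to any pod created
    # in a namespace we know about.
    patch[p] {
        input.request.kind.kind == "Pod"
        pool := node_pool_for_namespace[input.request.namespace]
        p := {
            "op": "add",
            "path": "/spec/nodeSelector",
            "value": {"agentpool": pool}
        }
    }

When OPA is registered as the mutating webhook, rules like this get evaluated against each incoming AdmissionReview, so the patch is only produced when every statement in the rule body matches.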

I’ll go over this very broadly; I highly recommend looking at the docs in detail before diving in, as I lost quite a bit of time to not reading things properly before starting.

I won’t lie, getting used to the DSL (Rego) was painful for me, mainly because I came at it thinking it was going to be a lot like Golang. It does look quite like it, but that’s where the similarity ends: it’s more functional/pattern-matching and better suited to tersely making decisions based on data.

To counter the learning curve of Rego I have to say that, as I’ve raised issues and contributions, the maintainers have been super responsive and helpful (even when I’ve made some silly mistakes), and the docs are great, with runnable samples to get you started.

Let’s talk more about what I built out.

Continue reading

Azure, How to, kubernetes

Kubernetes Integration Testing: MiniKube + Azure Pipelines = Happy

I recently did some work on a fairly simple controller to run inside Kubernetes. It connects to the K8s API and watches for changes to ingress objects in the cluster.

I had a nice cluster spun up for testing which I could tweak and poke, then observe the results. This was nice, BUT I wanted to translate it into something that ran as part of my CI process to make it more repeatable. Having not played much with the new Azure Pipelines, I decided to try and get this working using one.

Here was the goal:

    • Build the source for the controller
    • Spin up a Kubernetes cluster
    • Deploy test resources (Ingress and Services) into the cluster
    • Connect the controller code to the cluster and run its tests

The obvious choice was to create the clusters inside a cloud provider and use those for testing, but I wanted each PR/branch to be validated independently in a separate cluster, ideally in parallel, so things get complicated and expensive if we go down that route.

Instead I worked with MiniKube, which has a ‘no VM mode’ that spins up a whole cluster using just Docker containers. The theory was: if the CI supports running Docker containers, it should support MiniKube clusters…
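
As a rough sketch of the idea (not the exact pipeline – the fixture file and the final test command are placeholders, and newer MiniKube releases spell the flag --driver=none), the CI steps boil down to something like:

    # Start a single-node cluster directly on the CI host's Docker daemon;
    # the 'none' driver needs root, hence sudo.
    sudo minikube start --vm-driver=none

    # Wait for the node to be ready before deploying anything into it.
    kubectl wait --for=condition=Ready node --all --timeout=120s

    # Deploy the test Ingress and Service resources the controller expects.
    kubectl apply -f test-fixtures.yaml

    # Point the controller at the cluster and run its integration tests
    # (the exact command depends on how the tests are written).
    make integration-test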

TL;DR: Yes, this is possible with MiniKube and Azure Pipelines or Travis CI – skip to the end to see how.

Continue reading

How to, kubernetes, vscode

Autocomplete Kubernetes YAML files in VSCode

I’ve increasingly been working with Kubernetes and hence lots of YAML files.

It’s nice and easy to get autocomplete set up for Kubernetes YAML using this awesome extension: YAML Support by Red Hat.

Setup:

  • Install the Extension
  • Add the following to your settings:
    "yaml.schemas": {
      "Kubernetes": "*.yaml"
    }
  • Reload the editor

Here is me setting it up and showing it off:

Massive thanks to the team that worked on the extension and language server to support this!
