Apps, Azure, kubernetes

Magic, MutatingAdmissionsControllers and Kubernetes: Mutating pods created in your cluster

I recently wanted to use a Mutating Admissions Controller in Kubernetes to alter pods submitted to the cluster – here is a quick summary of how to do it.

In this case we wanted to change the image pull location, just as a quick example (I’m not sure this is a great idea in a real system as it introduces a single point of failure for pod creation but the sample code should be useful to others).

So how do they work? Well it’s super simple: you register a webhook in K8s which is called when a certain action occurs, and you create a receiver which accepts that webhook call and responds with a JSONPatch containing any changes you want to make.
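
A JSONPatch is just an array of operations describing the edits to apply; for example, a patch that rewrites a container’s image might look something like this (the values here are purely illustrative):

// Illustrative only - an example JSONPatch that swaps a container image
const examplePatch = [
  { op: "replace", path: "/spec/containers/0/image", value: "myregistry.local/nginx:1.13" },
];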

Let’s try it out. First up you’ll need ngrok, which creates a public endpoint for a port on your machine, with an HTTPS cert and everything. We’ll use this for testing.

Let’s start our webhook receiver locally:

  1. ngrok http 3000
  2. git clone https://github.com/lawrencegripper/MutatingAdmissionsController and cd into the directory
  3. npm install && npm run watch-server

Next, you register the webhook in Kubernetes; in this case we register one to be called whenever a pod is created:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: local-repository-controller-webhook
webhooks:
- clientConfig:
    # ngrok public cabundle
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tDQpNSUlGcXpDQ0JKT2dBd0lCQWdJUURITW1IVnRRYmdwZGRocWhDUllpbVRBTkJna3Foa2lHOXcwQkFRc0ZBREJlDQpNUXN3Q1FZRFZRUUdFd0pWVXpFVk1CTUdBMVVFQ2hNTVJHbG5hVU5sY25RZ1NXNWpNUmt3RndZRFZRUUxFeEIzDQpkM2N1WkdsbmFXTmxjblF1WTI5dE1SMHdHd1lEVlFRREV4UlNZWEJwWkZOVFRDQlNVMEVnUTBFZ01qQXhPREFlDQpGdzB4T0RBek1USXdNREF3TURCYUZ3MHhPVEF6TVRJeE1qQXdNREJhTUJVeEV6QVJCZ05WQkFNTUNpb3VibWR5DQpiMnN1YVc4d2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURwSDlJbDZrUU05Y3JPDQpaaVJxeTJQUlg4Y2VZY3IwODdLL2l4L3I1SFNKZ2p4TUN3OXM2RTBEQ3lsMGNla0h1TmwvN2FvT0YrQ2NKbktkDQo4aEtQVThwVXVsUjJNdTE0NGY2V1Vkb29wcFYrUTY0V0pyc0w2M0xnVy9kaVJNRmZLOWppU3BTa2VWdmIwRFNTDQpEQnE4UTlNZ2theklrZUhGM1JadGEzUjZCOG5SWFVZNW1uU2pjY1hja2pUQlJLRE9hblNhOSt3cXF1MG45QzF5DQpFVVdzbTNibktQMFI2RDdZNU0zRXNsdG5XN3ZRTEhPSndialQzVC9BM00vSnk0dGJKanVSNUhHTm9ydG1XVS84DQpTRXZ0KzRTbi84MHhhNUlkL3NrblZmd0ZlU3ZITzVscEQzVXZuY1UyajNoeUpqK05jYnUyQUxvRGFiMlFuVjZGDQpaUTErQ1YybkFnTUJBQUdqZ2dLc01JSUNxREFmQmdOVkhTTUVHREFXZ0JSVHloZFovR3ZBQXlFdkdxN2txcWdjDQpnbGJhZFRBZEJnTlZIUTRFRmdRVXJibTJzUXpoK1NmSGVCOEd0Y1RHOE9xS091c3dId1lEVlIwUkJCZ3dGb0lLDQpLaTV1WjNKdmF5NXBiNElJYm1keWIyc3VhVzh3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHDQpDQ3NHQVFVRkJ3TUJCZ2dyQmdFRkJRY0RBakErQmdOVkhSOEVOekExTURPZ01hQXZoaTFvZEhSd09pOHZZMlJ3DQpMbkpoY0dsa2MzTnNMbU52YlM5U1lYQnBaRk5UVEZKVFFVTkJNakF4T0M1amNtd3dUQVlEVlIwZ0JFVXdRekEzDQpCZ2xnaGtnQmh2MXNBUUl3S2pBb0JnZ3JCZ0VGQlFjQ0FSWWNhSFIwY0hNNkx5OTNkM2N1WkdsbmFXTmxjblF1DQpZMjl0TDBOUVV6QUlCZ1puZ1F3QkFnRXdkUVlJS3dZQkJRVUhBUUVFYVRCbk1DWUdDQ3NHQVFVRkJ6QUJoaHBvDQpkSFJ3T2k4dmMzUmhkSFZ6TG5KaGNHbGtjM05zTG1OdmJUQTlCZ2dyQmdFRkJRY3dBb1l4YUhSMGNEb3ZMMk5oDQpZMlZ5ZEhNdWNtRndhV1J6YzJ3dVkyOXRMMUpoY0dsa1UxTk1VbE5CUTBFeU1ERTRMbU55ZERBSkJnTlZIUk1FDQpBakFBTUlJQkJBWUtLd1lCQkFIV2VRSUVBZ1NCOVFTQjhnRHdBSFlBdTluZnZCK0tjYldUbENPWHFwSjdSemhYDQpsUXFyVXVnYWtKWmtObzRlMFlVQUFBRmlIRWtybVFBQUJBTUFSekJGQWlCZFRxNXZrdHJqRzZDWjltK24yRFk3DQptVEdndWJNVWpESHBFY1hJMHgzSU53SWhBTVFnZ0VpT3JGUlh0WHc4VnlZZzRlNVpGUjJhbnRCakdnWE5tWXJCDQp2K24xQUhZQWIxTjJyREh3TVJuWW1RQ2tVUlgvZHhVY0Vka0N3UUFwQm8yeUNKbzMyUk1BQUFGaUhFa3IvZ0FBDQpCQU1BUnpCRkFpRUExUTRVNmVyQ3pPWVJ1OW55UFh6MnFWY2RnL3BwVVdPREVyNWNTbEdJeVJ3Q0lIM2o0ZWMzDQpDbXROaitaN0l1T3R3YjZwMmNPSGlrRHBvNDFWTmlvM3pBcm1NQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUE1DQpCM25FZE5iU3huYmVrTUtHeFo3QlFoTi9uVVRiN0QraVMxMnIxK3BUL0VqdzhUUmEzdWVEWjdWcnNxU3AzaE1tDQpXMmYwMjJkanpocnhFSGkyYWJGb0VUS0Q1dUFSR1F2dDVML2h2bEhmZGtyZVMxSWZxK2YyUVllcU9zcVJlUUxHDQpOUTR6ekRXS1gxeTRBNGpQSi9uQ0diNk16UnlIUytHemhKMU50YnozNDJhQ2IxWER6aXBNSVpZUXhZTDFISjVTDQo5VzRYOGg2c2w4NDJXTmVvajBtYU10ZmVpajhURGlnUnJBUFE5anRYK29lOG1mdG4zOVVyK1ZvRC9FaTl3cWhMDQo1Rlh2WGR3MWxmT0RLNEpzd2JtTzdXMExwNzJnMDBHY3k5a1h1MUpoVkQvVDFFamh0NW92YjdIQ1VWYXkzT0plDQpKUE5Pemh6eTUzUzUzS0hiN0gvSQ0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQ0K
    url:  https://1eafed28.ngrok.io/pod
  failurePolicy: Fail
  name: 1eafed28.ngrok.io
  namespaceSelector: {}
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods

 

When our simple Koa.js app, written in TypeScript, receives the request it does the following (see the sketch after this list):

  1. Clone the incoming pod spec into a new object
  2. Make changes to the clone, updating the image location
  3. Create a JSONPatch by comparing the original and the clone
  4. Base64 encode the JSONPatch data
  5. Return the patch as part of an `admissionResponse` object
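
Putting those steps together, the receiver ends up looking roughly like the sketch below. This is a simplified, hypothetical version rather than the exact repo code: the koa-router/koa-bodyparser wiring and the fast-json-patch `compare` helper are assumptions, and myregistry.local is just a stand-in for wherever you want images pulled from.

// Simplified TypeScript sketch of the /pod webhook receiver (assumptions noted above)
import * as Koa from "koa";
import * as bodyParser from "koa-bodyparser";
import * as Router from "koa-router";
import { compare } from "fast-json-patch"; // assumed JSONPatch diff helper

const app = new Koa();
const router = new Router();

router.post("/pod", (ctx: any) => {
  const admissionRequest = ctx.request.body.request; // AdmissionReview sent by the API server
  const pod = admissionRequest.object;

  // 1. Clone the incoming pod spec into a new object
  const mutatedPod = JSON.parse(JSON.stringify(pod));

  // 2. Make changes to the clone - here, prefix each image with our repository (stand-in name)
  mutatedPod.spec.containers.forEach((c: any) => {
    c.image = `myregistry.local/${c.image}`;
  });

  // 3. Create a JSONPatch by comparing the original and the clone
  const patch = compare(pod, mutatedPod);

  // 4. Base64 encode the patch and 5. return it as part of the admissionResponse
  ctx.body = {
    response: {
      uid: admissionRequest.uid,
      allowed: true,
      patchType: "JSONPatch",
      patch: Buffer.from(JSON.stringify(patch)).toString("base64"),
    },
  };
});

app.use(bodyParser());
app.use(router.routes());
app.listen(3000);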

The code is hopefully nice and simple to follow, so take a look at it here. If you’d like a more complex example you can take a look at the Golang code here in Istio, which uses a similar method to inject the Istio sidecars (this is what I read in order to write the TypeScript example).

That’s it, nice and simple.

Note: the ngrok approach won’t work in an Azure AKS cluster due to networking restrictions; you’ll need an ACS Engine cluster or similar. Alternatively you can test inside the cluster with the receiver set up as a service, but beware of circular references (the receiver pod can’t be created because CREATE calls the webhook, which is served by the very pod that can’t be created).

How to, kubernetes, vscode

Autocomplete Kubernetes YAML files in VSCode

I’ve increasingly been working with Kubernetes and hence lots of YAML files.

It’s nice and easy to get autocomplete set up for Kubernetes YAML using this awesome extension: YAML Support by Red Hat.

Setup:

  • Install the Extension
  • Add the following to your settings
    "yaml.schemas": {
      "Kubernetes": "*.yaml"
    }
    
  • Reload the editor

Here is me setting it up and showing it off:
(animated GIF: vscodeyamlautocomplete3)

Massive thanks to the team that worked on the extension and language server to support this!

kubernetes

Kubernetes: Prevent Container from accessing Cluster API

I’ve recently been playing with Kubernetes as a way to efficiently host my microservices. It’s great and I’m really enjoying it.

One thing that did take me by surprise is that containers, by default, have credentials mounted which allow them to talk to the cluster API. This is incredibly useful for pulling information about other services or secrets, or for enabling self-orchestrating/scaling services; however, for the services that don’t need it, it presents an opportunity for an attacker to escalate their privileges.
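
To make that concrete, here is a minimal sketch of what that access looks like from inside a pod, assuming the default in-cluster API endpoint and mount path (what the token is actually permitted to do depends on the cluster’s RBAC configuration):

// Minimal sketch: using the mounted service account credentials to call the cluster API
import * as fs from "fs";
import * as https from "https";

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount";
const token = fs.readFileSync(`${saDir}/token`, "utf8");
const namespace = fs.readFileSync(`${saDir}/namespace`, "utf8");
const ca = fs.readFileSync(`${saDir}/ca.crt`);

// List the pods in our own namespace via the in-cluster API endpoint
https.get(
  {
    host: "kubernetes.default.svc",
    path: `/api/v1/namespaces/${namespace}/pods`,
    ca,
    headers: { Authorization: `Bearer ${token}` },
  },
  (res) => res.pipe(process.stdout)
);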

We tend to think of containers as (roughly) providing limited access to resources and information, and preventing access to other containers, but with this token the workload is able to reach beyond the container. The mitigating factor is that the attacker would first have to compromise the application running in the container; once they had done that, they could extract and use the token to control, or pull information from, your cluster.

As always, defense in depth is a good policy. We aim to prevent our container from being compromised but we ALSO aim to limit the damage if it is.

So here is how to prevent the service account token from being mounted into containers which don’t require it:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...

RBAC also presents an opportunity for finer-grained control over the level of access, and is almost certainly something to consider too, along with the use of namespaces.

How could an attacker get access?

Here is a really simple example of a badly written Node.js app which allows users to pass in a version; this is used to find a file, and the contents of the file are then returned to the user.
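
Something along these lines (a hypothetical sketch rather than the post’s exact sample; the releases folder and port are made up):

// A deliberately vulnerable endpoint: the user-supplied "version" is used
// directly to build a file path, so "../" sequences can escape the app folder.
import * as Koa from "koa";
import * as fs from "fs";
import * as path from "path";

const app = new Koa();

app.use(async (ctx) => {
  const version = ctx.query.version as string;
  const filePath = path.join(__dirname, "releases", version);
  ctx.body = fs.readFileSync(filePath, "utf8");
});

app.listen(3000);

// e.g. /?version=../../../../../var/run/secrets/kubernetes.io/serviceaccount/token
// returns the pod's service account token to the caller.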

The k8s token is mounted at /var/run/secrets/kubernetes.io/serviceaccount

Using the ‘../’ syntax we can traverse out of the app’s folder up to the root, and then request the token, ca and namespace files from the service account.

This is a simple example to show the files exist and can be leaked. Other vulnerabilities, such as remote code execution, would allow an attacker to make requests to the cluster API using this token.

Summary

Kubernetes has a great architecture which lets containers talk to the cluster API, but in scenarios where this access isn’t required you’re likely to want to turn it off, providing defense in depth via the principle of least privilege.

 
