How to

Quick How to: Mount Azure Files Shares with Symlinks support on Ubuntu

By default, mounting Azure File Shares on Linux using CIFS doesn’t enable support for symlinks. You’ll see an error like this:

auser@acomputer:/media/shared$ ln -s linked -n t
ln: failed to create symbolic link 't': Operation not supported

So how do you fix this? Simple: add the following to the end of your CIFS mount command:

,mfsymlinks

So the command will look something like:

sudo mount -t cifs //<your-storage-acc-name>.file.core.windows.net/<storage-file-share-name> /media/shared -o vers=3.0,username=<storage-account-name>,password='<storage-account-key>',dir_mode=0777,file_mode=0777,mfsymlinks
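
If you want the share to remount automatically at boot, the same options can go into /etc/fstab. A sketch, assuming the same placeholder names as above and that the account key is kept in a separate credentials file rather than inline:

//<your-storage-acc-name>.file.core.windows.net/<storage-file-share-name> /media/shared cifs vers=3.0,credentials=/etc/smbcredentials/<your-storage-acc-name>.cred,dir_mode=0777,file_mode=0777,mfsymlinks 0 0

The credentials file just contains username=<storage-account-name> and password=<storage-account-key>, so the key isn’t exposed in /etc/fstab.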

So what does this do? Well, you have to thank Steve French and Conrad Minshal. They defined a format for storing symlinks on SMB shares; an explanation of the format can be found here.

Thanks to renash for her comment (scroll to the bottom), which enabled me to find this; this blog post is to help others and give more details.

Standard
kubernetes

Kubernetes: Prevent Container from accessing Cluster API

I’ve recently been playing with Kubernetes as a way to efficiently host my microservices. It’s great and I’m really enjoying it.

One thing that did take me by surprise is that containers, by default, have credentials mounted which allow them to talk to the cluster API. This is incredibly useful for pulling information about other services, secrets or enabling self-orchestrating/scaling services; however, for the services that don’t need it, it presents an opportunity for an attacker to escalate their privileges.

We tend to think of containers as (roughly) providing limited access to resources and information, and preventing access to other containers, but with this token a container is able to reach beyond itself. The mitigating factor is that the attacker would first have to compromise the application running in the container; once they had done this, they could extract and use the token to control or pull information from your cluster.

As always, defense in depth is a good policy. We aim to prevent our container from being compromised but we ALSO aim to limit the damage if it is.

So here is how to prevent the service account token from being mounted into containers which don’t require it:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
  ...
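
If nothing using a particular service account needs the token, the same flag can also be set on the ServiceAccount itself, so every pod using it opts out by default (a minimal sketch, reusing the build-robot account from above):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false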

RBAC also presents an opportunity for finer-grained control over the level of access, and is almost certainly something to consider too, along with the use of namespaces.
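
As a sketch of what that finer-grained control can look like, a Role and RoleBinding can restrict a service account to, say, read-only access to pods in a single namespace (the names here are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: my-namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: my-namespace
  name: read-pods
subjects:
- kind: ServiceAccount
  name: build-robot
  namespace: my-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io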

How could an attacker get access?

Here is a really simple example of a badly written Node.js app which allows the user to pass in a version; this is used to find a file, and the contents of the file are then returned to the user.
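
A minimal sketch of that kind of vulnerable handler (the route, folder and port names are illustrative):

// server.js - deliberately naive example, illustrative names only
const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();

// Return the release notes for the requested version.
// req.query.version is joined straight into the path with no validation,
// so "../" sequences let a caller walk out of the notes directory.
app.get('/notes', (req, res) => {
  const file = path.join(__dirname, 'notes', req.query.version || 'latest.txt');
  fs.readFile(file, 'utf8', (err, data) => {
    if (err) return res.status(404).send('not found');
    res.send(data);
  });
});

app.listen(3000);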

The k8s token is mounted at /var/run/secrets/kubernetes.io/serviceaccount

Using the ‘../’ syntax we can traverse out of the app’s folder up to the root, and then request the token, ca and namespace files from the service account.
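
Against the sketch above, that looks something like this (the /notes route and port are the illustrative ones from the sketch; swap token for ca.crt or namespace to pull the other files):

curl "http://localhost:3000/notes?version=../../../../../../var/run/secrets/kubernetes.io/serviceaccount/token"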

This is a simple example to show that the files exist and can be leaked. Other vulnerabilities with remote code execution would allow the attacker to make requests to the cluster API using this token.
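
For example, from inside the compromised container an attacker could use the mounted files to call the cluster API directly, something along these lines (the default in-cluster API address is assumed):

SA=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat $SA/token)
NAMESPACE=$(cat $SA/namespace)
# List the secrets in the pod's namespace using the pod's own credentials
curl --cacert $SA/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     "https://kubernetes.default.svc/api/v1/namespaces/$NAMESPACE/secrets"

Whether that call succeeds depends on the RBAC rules bound to the service account, which is another reason the RBAC controls above matter.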

Summary

Kubernetes has a great architecture which lets containers talk to the cluster API, but in some scenarios, when this access isn’t required, you’re likely to want to turn it off to provide defense in depth by the principle of least privilege.


Standard
Azure

Making Globally Distributed Systems easier with Leadership elections in .NetCore

Recently I’ve been looking at how to ensure services are always running globally across a number of data centres.

Why would you want to use leadership elections to do this?

I’ll cover this at a very high level; for more detail look at articles like this and this, and I also highly recommend the chapter covering this in Google’s Site Reliability Engineering book.

I want the service to remain available during outages which affect a particular node or DC.

To do this you have two options: active-active services, where all DCs can serve all traffic, or active-passive, where one DC or node is the master and the others are secondaries. There is also a halfway house, but we’ll leave that for now.

The active-active approach is better for some scenarios but comes at a cost: data replication, race conditions and synchronisation require careful thought and management.

For a lot of services the idea of a master node can greatly simplify matters. If you have a cluster of 3 machines spread across 3 DCs and they consistently elect a single, healthy master node which orchestrates requests, things get nice and simple. You always have one node running as the master, and it moves around as needed in response to issues.

The code it runs can be written in a much simpler fashion (generally speaking). You will only ever have one node executing it, as the elected master, so concurrency concerns are removed. The master can be used to orchestrate what the secondaries are doing, or simply process requests itself and only use a secondary if it fails. Developers can write, and more importantly test, in a more manageable way.

Now how does this affect scaling a service? Well, you can partition to scale this approach. When your single master is getting too hot, you can split the load across two clusters, each responsible for a partition, say split by user ID. But we’ll leave this for another day.

So how do we do this in .Net?

Well, we need a consensus system to handle the election. I chose to use an etcd cluster deployed in multiple DCs; others to consider are Consul and Zookeeper. Let’s get into the code…

Continue reading

Standard