Azure, How to

Using .img files on Azure Files to get full Linux filesystem compatibility

Here is the scenario: I wanted to use a container to run a Linux app in Azure, and I needed to persist changes to the filesystem via Azure Files. Initially this appears nice and simple – you mount the Azure File share using CIFS and then pass this into Docker as a volume mount. But what if your app relies on some Linux-specific operations? Some, like symlinks, can be emulated (see here for details), but others can’t.
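For reference, that ‘nice and simple’ version looks roughly like the sketch below; the storage account, share, key, paths and image name are all placeholders.

# Mount the Azure Files share over CIFS.
sudo mkdir -p /mnt/azurefiles
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share> /mnt/azurefiles \
  -o vers=3.0,username=<storage-account>,password=<storage-key>,dir_mode=0777,file_mode=0777,serverino

# Hand the CIFS mount straight to the container as a volume.
docker run -v /mnt/azurefiles:/data <your-app-image>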

This is a really simple little hack which I learnt from the Azure Cloudshell implementation. It uses the ‘.img’ format to store a full EXT2 filesystem on Azure files.

How does it work? Well, if you open a cloudshell instance and use the ‘mount’ command you can see it has two interesting mounts: one for CIFS and one for a loop0 device.

Seeing this piqued my interest: what was the loop0 device being mounted as my home directory? The cloudshell creates a storage account to persist your files between sessions, so next I took a look at this to see what I could find.

(Screenshot: the cloudshell storage account’s file share, showing the ‘.img’ file it uses.)

This is when I found the ‘.img’ file being used. So how do we use this approach for ourselves, as it seems to work nicely for cloudshell?

It’s actually pretty simple: we mount the Azure file share with CIFS, create the ‘.img’ file in the CIFS share, format it and then loop-mount the ‘.img’ file. Done.

The key is to create a ‘.img’ file which is sparse, meaning we don’t write all the empty space to file storage; otherwise creating a 10GB ‘.img’ file would involve copying 10GB up to Azure files. This is done by passing the ‘seek’ operand to dd, as in the sketch below.
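Putting that together, a rough sketch of the whole flow looks like this, assuming the share is already mounted over CIFS at /mnt/azurefiles as in the earlier snippet; the paths, size and image name are just placeholders.

# Create a sparse 10GB image: 'seek' jumps straight to the end of the file,
# so the empty space is never actually written up to the Azure file share.
dd if=/dev/zero of=/mnt/azurefiles/data.img bs=1M count=0 seek=10240

# Put an ext2 filesystem inside the image (-F because it's a regular file, not a device).
mkfs.ext2 -F /mnt/azurefiles/data.img

# Loop-mount the image; this behaves like a normal local Linux filesystem.
sudo mkdir -p /mnt/imgdata
sudo mount -o loop /mnt/azurefiles/data.img /mnt/imgdata

# Give the container the loop-mounted path rather than the raw CIFS path.
docker run -v /mnt/imgdata:/data <your-app-image>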

So this gives you a fully compatible Linux disk stored on Azure files, which persists across container restarts and machine moves.

PS. If you’re replicating super critical data, give this some extensive testing first; this approach worked nicely for my use case but do exercise caution.

Azure

Making Globally Distributed Systems easier with Leadership elections in .NetCore

Recently I’ve been looking at how to ensure services are always running globally across a number of data centres.

Why would you want to use Leadership elections to do this?

I’ll cover this at a very high level; for more detail look at articles like this and this, and I also highly recommend the chapter covering this in Google’s Site Reliability Engineering book.

I want the service to remain available during outages affecting a particular node or DC.

To do this you have two options: active-active services, where all DCs can serve all traffic, or active-passive, where one DC or node is the master and the others are secondaries. There is also a halfway house but we’ll leave that for now.

The active-active approach is better for some scenarios but comes at a cost: data replication, race conditions and synchronisation require careful thought and management.

For a lot of services the idea of a master node can greatly simplify matters. If you have a cluster of 3 machines spread across 3 DCs and they consistently elect a single, healthy, master node which orchestrates requests – things can get nice and simple. You always have one node running as the master, and it moves around between nodes as needed, in response to issues.

The code it runs can be written in a much simpler fashion (generally speaking). You will only ever have one node executing it, as the elected master, so concurrency concerns are removed. The master can be used to orchestrate what the secondaries are doing, or simply process requests itself and only use a secondary if it fails. Developers can write, and more importantly test, in a more manageable way.

Now how does this affect scaling a service? Well, you can partition to scale this approach: when your single master is getting too hot you can split the load across two clusters, each responsible for a partition, say split by user id. But we’ll leave this for another day.

So how do we do this in .Net?

Well, we need a consensus system to handle the election. I chose to use an etcd cluster deployed across multiple DCs; others to consider are Consul and ZooKeeper. Let’s get into the code…
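At a very high level the pattern the code follows is something like the sketch below. This is purely illustrative: ILeaseStore is a made-up stand-in for whichever consensus client you use (the full post uses etcd), and the idea is simply that the node which manages to create, and keep renewing, a TTL’d leader key does the master-only work while everyone else waits and retries.

using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical stand-in for a consensus client (etcd, Consul, ZooKeeper...).
public interface ILeaseStore
{
    // Try to create the leader key with a TTL; true means we now hold the lease.
    Task<bool> TryAcquireAsync(string key, string nodeId, TimeSpan ttl);

    // Refresh a lease we already hold; false means we have lost it.
    Task<bool> TryRenewAsync(string key, string nodeId, TimeSpan ttl);
}

public class LeaderElectionLoop
{
    private static readonly TimeSpan Ttl = TimeSpan.FromSeconds(15);
    private readonly ILeaseStore _store;
    private readonly string _nodeId;

    public LeaderElectionLoop(ILeaseStore store, string nodeId)
    {
        _store = store;
        _nodeId = nodeId;
    }

    public async Task RunAsync(Func<CancellationToken, Task> leaderWork, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            if (await _store.TryAcquireAsync("leader", _nodeId, Ttl))
            {
                // We won the election: start the master-only work.
                using (var cts = CancellationTokenSource.CreateLinkedTokenSource(token))
                {
                    var work = leaderWork(cts.Token);

                    // Keep renewing the lease; a failed renewal means we are no longer master.
                    while (!token.IsCancellationRequested &&
                           await _store.TryRenewAsync("leader", _nodeId, Ttl))
                    {
                        await Task.Delay(TimeSpan.FromSeconds(5), token);
                    }

                    // Lost the lease (or shutting down): stop the master-only work.
                    cts.Cancel();
                    await Task.WhenAny(work, Task.Delay(TimeSpan.FromSeconds(5)));
                }
            }

            // Not the master right now: back off and try again.
            await Task.Delay(TimeSpan.FromSeconds(5), token);
        }
    }
}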

Continue reading

Azure, Coding, How to

Fixing ASPNET Production Issues by adding custom data to App Insights logs

Debugging issues in production is hard. Things vary – two seemingly identical requests can be made to the same URL, yet one succeeds and the other fails.

Without information about the context and configuration it’s hard to isolate the issue; here is a quick way to get more of that context.

The key is having the data to understand the cause of the failures. One of the things we do to help with that is to create our own ITelemetryInitializer for Application Insights.

In this we track which environment the request was made in, the cloud instance handling the request, code version, UserId and lots more.

This means, if an issue occurs, we have a wealth of data to understand and debug the issue. It’s embedded in every telemetry event tracked.

Here is an example of a Telemetry Initializer which adds the UserID and the Azure WebApps Instance Id to tracked events, if a user is signed in. You’ll find that once you start using this, you’ll add more information over time.
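The original initializer isn’t reproduced here, so below is a minimal sketch of the idea, assuming ASP.NET Core with an IHttpContextAccessor injected to read the signed-in user, and the WEBSITE_INSTANCE_ID environment variable that Azure App Service sets on each instance; the class internals and property names are illustrative rather than the post’s exact code.

using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Http;

public class AppInsightsTelemetryInitializer : ITelemetryInitializer
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public AppInsightsTelemetryInitializer(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    // Called for every telemetry item before it is sent.
    public void Initialize(ITelemetry telemetry)
    {
        // Which Azure WebApps instance handled the request.
        var instanceId = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID");
        if (!string.IsNullOrEmpty(instanceId) && telemetry is ISupportProperties item)
        {
            item.Properties["WebAppInstanceId"] = instanceId;
        }

        // The signed-in user, if there is one.
        var user = _httpContextAccessor.HttpContext?.User;
        if (user?.Identity?.IsAuthenticated == true)
        {
            telemetry.Context.User.AuthenticatedUserId = user.Identity.Name;
        }
    }
}

Because this sketch takes a constructor dependency, in ASP.NET Core you would typically register it through DI instead (services.AddHttpContextAccessor(); services.AddSingleton<ITelemetryInitializer, AppInsightsTelemetryInitializer>();) so the Application Insights SDK picks it up; the registration below is the post’s approach and assumes a parameterless initializer.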

To get it set up, register it like this in your Startup.cs:

// e.g. in Startup.Configure (requires using Microsoft.ApplicationInsights.Extensibility;)
TelemetryConfiguration.Active.TelemetryInitializers.Add(new AppInsightsTelemetryInitializer());

Go do it now, seriously you’ll need it later!
