Writing Bash doesn’t have to be as painful as you think! ShellCheck to the rescue.

So I’ve found myself writing lots of bash scripts recently, and because they tend to do real things to the file system or to cloud services, they’re hard to test… it’s painful.

It turns out there is an awesome linter/checker for bash called ShellCheck which you can use to catch a lot of those gotchas before they become a problem.

There is a great plugin for VS Code, so you get instant feedback when you do something you shouldn’t.
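As a quick illustration (this snippet is mine, not from the post’s scripts), here are the kinds of mistakes it flags:

#!/bin/bash
# Classic gotchas ShellCheck will call out:
for f in $(ls ./scripts); do # SC2045: iterating over ls output is fragile, use a glob
  rm -f $f                   # SC2086: unquoted variable word-splits on spaces
done
cd /tmp/build                # SC2164: use 'cd ... || exit' in case cd fails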

Better still, it’s easy to get running in your build pipeline to keep everyone honest. Here is an example task for Azure DevOps that runs it on all scripts in the ./scripts folder.


- bash: |
    echo "This checks for formatting and common bash errors. See wiki for error details and ignore options: https://github.com/koalaman/shellcheck/wiki/SC1000"
    export scversion="stable"
    wget -qO- "https://storage.googleapis.com/shellcheck/shellcheck-${scversion?}.linux.x86_64.tar.xz" | tar -xJv
    sudo mv "shellcheck-${scversion}/shellcheck" /usr/bin/
    rm -r "shellcheck-${scversion}"
    shellcheck ./scripts/*.sh
  displayName: "Validate Scripts: Shellcheck"
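If a particular warning is a false positive for your script, you can suppress just that check with an inline directive rather than ignoring the whole file (the wiki page referenced above documents these; the variable here is a made-up example):

# shellcheck disable=SC2086
rm -f $files_to_delete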


Next on my list is to play with shunit2, the xUnit-inspired testing framework for bash, though I can’t help feeling that if you have enough going on to need tests you should probably be using Python.
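For a taste of it, here’s a minimal shunit2 sketch (the greet function is a made-up example, and the final line assumes shunit2 is on your PATH):

#!/bin/bash
# test_greet.sh - run with: bash test_greet.sh
greet() { echo "hello, $1"; }

test_greet_says_hello() {
  assertEquals "hello, world" "$(greet world)"
}

# Sourcing shunit2 at the end runs every function named test*
. shunit2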

 


Friends don’t let friends commit Terraform without fmt, linting and validation

So it starts out easy: you write a bit of Terraform and all is going well; then, as more and more people start committing and the code churns, things start to get messy. Breaking commits block releases, formatting isn’t consistent, and errors get repeated.

Seems a bit odd, right? In the middle of your DevOps pipeline, which dutifully checks that code passes tests and validation, you just give Terraform a free pass.

The good news is Terraform has tools to help you out here and make life better!

Here is my rough script for running during the build to detect, and fail early on, a host of Terraform errors. It also pins Terraform to a set release (hopefully the same one you use when releasing to prod) and runs a terraform init each time to make sure you have your providers pinned (if you don’t, the script will fail when a provider ships breaking changes, giving you an early heads-up).

It’s rough and ready, so make sure you’re happy with what it does before you give it a run. As an added bonus, the docker command below the script runs it inside an Azure DevOps container to emulate locally what should happen when you push.


#!/bin/bash
set -e
SCRIPT_DIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPT_DIR"
echo -e "\n\n>>> Installing Terraform 0.12"
# Install terraform tooling for linting terraform
wget -q https://releases.hashicorp.com/terraform/0.12.4/terraform_0.12.4_linux_amd64.zip -O /tmp/terraform.zip
sudo unzip -q -o -d /usr/local/bin/ /tmp/terraform.zip
echo ""
echo -e "\n\n>>> Install tflint (3rd party)"
wget -q https://github.com/wata727/tflint/releases/download/v0.9.1/tflint_linux_amd64.zip -O /tmp/tflint.zip
sudo unzip -q -o -d /usr/local/bin/ /tmp/tflint.zip
echo -e "\n\n>>> Terraform version"
terraform -version
echo -e "\n\n>>> Terraform Format (if this fails use 'terraform fmt' command to resolve)"
terraform fmt -recursive -diff -check
echo -e "\n\n>>> tflint"
tflint
echo -e "\n\n>>> Terraform init"
terraform init
echo -e "\n\n>>> Terraform validate"
terraform validate



docker run --rm -v ${PWD}:/source mcr.microsoft.com/azure-pipelines/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard \
  /source/deployment/validate-tf.sh

Optionally, you can add arguments like -var java_functions_zip_file=something to the terraform validate call.
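And if you want to catch unformatted code before it’s even committed, a minimal pre-commit hook does the trick (my own sketch, assuming terraform is on your PATH; save it as .git/hooks/pre-commit and make it executable):

#!/bin/bash
# Refuse the commit if any Terraform file is unformatted
if ! terraform fmt -recursive -check; then
  echo "terraform fmt check failed. Run 'terraform fmt -recursive' and re-stage." >&2
  exit 1
fi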

Hope this helps as a quick rough guide!


Using .img files on Azure Files to get full Linux filesystem compatibility

Here is the scenario: I wanted to use a container to run a Linux app in Azure, and I needed to persist changes to the filesystem via Azure Files. This initially appears nice and simple: you mount the Azure Files share using CIFS and then pass it into docker as a volume mount. But what if your app relies on Linux-specific operations? Some, like symlinks, can be emulated (see here for details), but others can’t.

This is a really simple little hack which I learnt from the Azure Cloud Shell implementation. It uses a ‘.img’ file to store a full EXT2 filesystem on Azure Files.

How does it work? Well, if you open a Cloud Shell instance and use the ‘mount’ command, you can see it has two interesting mounts: one for CIFS and one for a loop0 device.
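Something along these lines (trimmed and reconstructed from memory, so treat the paths and options as illustrative rather than exact):

mount | grep -E 'cifs|loop'
# //storageacct.file.core.windows.net/cs-abc123 on /usr/csuser/clouddrive type cifs (rw,...)
# /dev/loop0 on /home/user type ext2 (rw,relatime,...)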

Seeing this piqued my interest: what was the loop0 device mounting as my home directory? Cloud Shell creates a storage account to persist your files between sessions, so next I took a look at that to see what I could find.

This is when I found the ‘.img’ file being used. So how do we use this approach for ourselves, given it seems to work nicely for Cloud Shell?

It’s actually pretty simple: we mount the Azure Files share with CIFS, create the ‘.img’ file in the CIFS share, format it and then mount the ‘.img’ file. Done.


#!/bin/bash
# $1 = Azure storage account name
# $2 = Azure storage account key
# $3 = Azure file share name
# $4 = mountpoint path
mkdir -p "$4"
# CIFS settings from the Azure Cloud Shell container, which uses the .img approach.
mount -t cifs "//$1.file.core.windows.net/$3" "$4" -o "vers=2.1,username=$1,password=$2,sec=ntlmssp,cache=strict,domain=X,uid=0,noforceuid,gid=0,noforcegid,file_mode=0777,dir_mode=0777,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1"
mkdir -p "$4/imgs/"
SHAREIMGFILE="$4/imgs/containerstorage.img"
# Create a sparse 10GiB file: count=0 writes no data, seek just extends the file.
dd if=/dev/zero of="$SHAREIMGFILE" bs=1 count=0 seek=10G
# Format the image as ext2 (-F is needed because the target is a file, not a device).
mkfs -t ext2 -F "$SHAREIMGFILE"
# Loop-mount the image over the mountpoint; this shadows the CIFS mount, which
# stays active underneath and keeps the .img file reachable via the loop device.
mount -o loop,rw,relatime,block_validity,barrier,user_xattr,acl "$SHAREIMGFILE" "$4"
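Usage looks something like this (the account name, key variable, share name and mountpoint are all placeholders):

sudo ./init.sh mystorageacct "$STORAGE_ACCOUNT_KEY" myshare /mnt/appdata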


The key is to create an ‘.img’ file which is sparse, meaning we don’t write all the empty space to file storage; otherwise creating a 10GiB ‘.img’ file would involve copying 10GiB to Azure Files. This is done by passing the ‘seek’ argument to the dd command in the script (with count=0, dd writes nothing and simply extends the file to 10G).
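You can see the sparseness for yourself by comparing the file’s apparent size with what’s actually allocated (on a local disk this is clear-cut; over CIFS the allocation reported depends on the server):

ls -lh containerstorage.img # apparent size: 10G
du -h containerstorage.img  # blocks actually allocated: small until data is written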

So this gives you a fully compatible Linux filesystem stored on Azure Files, which can persist across container restarts and machine moves.

P.S. If you’re keeping super-critical data on this, give it some extensive testing first; the approach worked nicely for my use case, but do exercise caution.
