Coding, How to

Easily Debugging Terraform Provider for Azure in VSCode

So you’re making a change to the provider to add a feature. It’s going great and you’re ready to test it out… but then things get a bit ropey, and ideally you want a visual debugger to step through the code.

Well here is how to set that up in VSCode.

First, make sure you have VSCode set up for Go debugging (Delve installed and configured, etc.). Then it’s easy: say you want to debug a new data source you’ve written and it has a test called TestAccDataSourceAzureRMFunction_basic in the file data_source_function_test.go; you can then set up your launch.json file in VSCode to look like this:

(Make sure you replace the details in the private.env file below with your own service principal and subscription details.)

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Launch test function",
            "type": "go",
            "request": "launch",
            "mode": "test",
            "program": "${workspaceRoot}/azurerm/data_source_function_test.go",
            "args": [
                "-test.v",
                "-test.run",
                "TestAccDataSourceAzureRMFunction_basic"
            ],
            "envFile": "${workspaceRoot}/.vscode/private.env",
            "showLog": true
        }
    ]
}


GOFLAGS='-mod=vendor'
ARM_CLIENT_ID=00000000-0000-0000-0000-000000000000
ARM_CLIENT_SECRET=00000000-0000-0000-0000-000000000000
ARM_SUBSCRIPTION_ID=00000000-0000-0000-0000-000000000000
ARM_TENANT_ID=00000000-0000-0000-0000-000000000000
ARM_TEST_LOCATION=northeurope
ARM_TEST_LOCATION_ALT=westeurope
TF_ACC=1
TF_LOG=DEBUG
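
For reference, here is roughly the equivalent invocation from a terminal (a sketch, assuming you’ve exported the variables from private.env into your shell first):

# Roughly what the launch configuration above runs under the debugger,
# handy as a sanity check. Assumes the variables from private.env are
# already exported in this shell.
go test -v -run TestAccDataSourceAzureRMFunction_basic ./azurerm/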


 

How to

Writing Bash doesn’t have to be as painful as you think! Shellcheck to the rescue.

So I’ve found myself writing lots of Bash scripts recently, and because they tend to do real things to the file system or to cloud services, they’re hard to test… it’s painful.

Never fear!

So it turns out there is an awesome linter/checker for Bash called ShellCheck, which you can use to catch a lot of those gotchas before they become a problem.

There is a great plugin for VSCode, so you get instant feedback when you do something you shouldn’t.
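
To give a flavour of what it catches, here is a hypothetical snippet with the classic unquoted-variable mistake; ShellCheck flags the last line with SC2086 (“Double quote to prevent globbing and word splitting”), and the fix is simply rm -rf "$target":

#!/bin/bash
# Hypothetical example script - cleanup.sh
target="$1"

# ShellCheck warns here (SC2086): the unquoted $target is split on spaces
# and glob-expanded, so a path like "my files/tmp *" does something very
# different to what you intended.
rm -rf $target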

Better still, it’s easy to get it running in your build pipeline to keep everyone honest. Here is an example task for Azure DevOps that runs it on all scripts in the ./scripts folder.

- bash: |
    echo "This checks for formatting and common bash errors. See wiki for error details and ignore options: https://github.com/koalaman/shellcheck/wiki/SC1000"
    export scversion="stable"
    wget -qO- "https://storage.googleapis.com/shellcheck/shellcheck-${scversion?}.linux.x86_64.tar.xz" | tar -xJv
    sudo mv "shellcheck-${scversion}/shellcheck" /usr/bin/
    rm -r "shellcheck-${scversion}"
    shellcheck ./scripts/*.sh
  displayName: "Validate Scripts: Shellcheck"


Next on my list is to play with the xUnit-inspired testing framework for Bash called shunit2, but I kinda feel that if you have enough stuff to need tests you should probably be using Python.
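
If you do go down the shunit2 route, a test file is pleasantly small. Here is a minimal sketch, assuming shunit2 is on your PATH and a hypothetical greet.sh defining a greet function:

#!/bin/bash
# test_greet.sh - minimal shunit2 sketch.
# Assumes a hypothetical ./greet.sh that defines greet(), and that the
# shunit2 script is installed somewhere on the PATH.
. ./greet.sh

testGreetSaysHello() {
  assertEquals "Hello, world" "$(greet world)"
}

# shunit2 discovers and runs the test* functions defined above.
. shunit2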

 

Azure, How to, kubernetes

Kubernetes Integration Testing: MiniKube + Azure Pipelines = Happy

Update: With the release of KIND (Kubernetes in Docker) I’ve now moved to using it over MiniKube, as it’s quicker and simpler.
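
(For a sense of how simple: with the kind CLI installed, spinning up a throwaway cluster is basically one command. A sketch; the kubeconfig context name varies slightly between kind releases.)

# Create a single-node Kubernetes cluster inside Docker containers.
kind create cluster
# Recent kind releases add a "kind-kind" context to your kubeconfig.
kubectl cluster-info --context kind-kind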

I recently did some work on a fairly simple controller to run inside Kubernetes. It connects to the K8s API and watches for changes to ingress objects in the cluster.

I had a nice cluster spun up for testing which I could tweak and poke, then observe the results. This was nice, BUT I wanted to translate it into something that ran as part of my CI process to make it more repeatable. Having not played much with the new Azure Pipelines, I decided to try and get this working with one.

Here was the goal:

    • Build the source for the controller
    • Spin up a Kubernetes cluster
    • Deploy test resources (Ingress and Services) into the cluster
    • Connect the controller code to the cluster and run its tests

The obvious choice was to create the clusters inside a cloud provider and use those for testing, but I wanted each PR/branch to be validated independently in a separate cluster, ideally in parallel, so things get complicated and expensive if we go down that route.

Instead I worked with MiniKube, which has a ‘no VM’ mode that spins up a whole cluster using just Docker containers. The theory was: if the CI agent supports running Docker containers, it should support MiniKube clusters…
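
As a rough sketch of what that looks like on a Linux CI agent (assuming Docker and kubectl are already installed; the driver flag has been renamed in newer MiniKube releases):

# Start Kubernetes directly against the agent's Docker daemon - no VM needed.
# The none driver requires root, hence sudo; newer releases spell the flag --driver=none.
sudo minikube start --vm-driver=none
# Wait for the single node to report Ready before deploying test resources.
kubectl wait --for=condition=Ready node --all --timeout=120s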

TLDR: Yes this is possible with MiniKube and Azure Pipelines or Travis CI – Skip to the end to see how.

