This is a quick one: if you get the following error or similar:
2021/06/07 00:15:35 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: sizes differ 189118 vs 130560
2021/06/07 00:15:35 Failed to copy: corrupted on transfer: sizes differ 189118 vs 130560
This tracks back to an issue with OneDrive's metadata generation altering the size of the file. You can see details on this issue here: https://github.com/rclone/rclone/issues/399
To resolve this you need to add --ignore-size to the rclone config that TrueNAS creates.
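As a minimal sketch of where the flag goes, assuming you were running rclone by hand (in TrueNAS the same flag goes into the config it generates for the Cloud Sync task; the paths and remote name here are placeholders):

```
# Skip the post-transfer size check that OneDrive's metadata rewriting trips up
rclone copy /mnt/tank/backup onedrive:backup --ignore-size
```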
Brain dump post, excuse the typos and rough writing; I'm getting this out of my head while I still remember it.
In this post I’m going to go through how I used query-based parameters to set up an Application Insights workbook so it does not have hard-coded resource IDs in its definition.
This makes it much easier to use in automated deployments where these IDs aren’t known, and as new resources are deployed the existing workbook picks them up automatically without requiring manual changes.
In this scenario we’re deploying a Resource Group with ARM/Terraform. Each group has its own Application Insights instance deployed. In the group are some App Service Plans, Cosmos DBs and Service Bus Namespaces. We want the workbook to deploy into the Application Insights instance in the group and graph the App Service Plan, Cosmos DB and Service Bus resources.
I’m new to workbooks so be aware there may be a simpler way to do this that I’ve not yet found!
First up, what does it look like if you don’t use this approach and just use the GUI to add metrics to your workbook?
Along the top we use the “Resource Type” and other drop-downs to select the resource we want to graph, then we add our “CPU” metric.
If you click into the “Advanced editor” you’ll see the following; notice that the “resourceIds” field now has a hard-coded reference to the resource we selected.
This means that if we exported this JSON and deployed it using ARM or Terraform, the workbook wouldn’t work. We’d want it to graph the metrics for the resource deployed alongside it, not the resource that is hard coded.
So how do we fix this?
Well we can use workbook parameters.
Parameters can be simple strings or more complex queries and resource selectors.
The first step is to find out which resource group we’re deployed into. This can be done by creating a parameter which finds the “Owned Resources”.
“Owned Resources” for this Application Insights instance is the instance itself, and the query returns its full Azure ID, like: /subscriptions/YOURSUB/resourceGroups/rg-processing/providers/Microsoft.Insights/components/app-insights
We’re going to use this to extract the current resource group’s name.
Next we use an Azure Resource Graph query, where id == "{OwningAppInsights}" | project split(id, "/", 4)[0], to pull the Resource Group name out of this ID.
The query finds the Application Insights instance then pulls out the Resource Group it’s deployed into (this is what the split is doing on the Azure ID). We add this as a new parameter called “ResourceGroup”; notice this param can depend on the previous param “OwningAppInsights” we just created.
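Written out in full, the Resource Graph query looks something like this (a sketch: the resources table prefix, the column alias and the tostring cast are my additions; the ID segments are zero-indexed, so index 4 is the group name):

```
resources
| where id == "{OwningAppInsights}"
| project ResourceGroup = tostring(split(id, "/", 4)[0])
```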
Now we can create our last workbook parameter, one which selects the App Service Plans in the resource group. This uses the output of the “ResourceGroup” parameter above to query for all the Plans in the group by filtering on the “type” of the resources in the group.
To find out which type you should use in the above query, run the following Azure Resource Graph query and review the results (note: turn “formatted results” off to see the original values, not the cleaned-up ones): where resourceGroup == "processing-myrg" | project type, name
So the query where resourceGroup == "{ResourceGroup}" and type == "microsoft.web/serverfarms" is returning all the resources that are serverfarms… this is internal Azure speak for App Service Plans.
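As a complete query it would look roughly like this (again a sketch: the resources table prefix and projecting the id for the parameter to consume are my assumptions):

```
resources
| where resourceGroup == "{ResourceGroup}" and type == "microsoft.web/serverfarms"
| project id, name
```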
We’ve ticked the box to “Allow multiple selections”, and we’ve ticked “Hide parameter in reading mode” as we don’t want users of the workbook to change this manually.
Then we can use this parameter when setting up our metric graph: we select the “ResourceApplicationPlans” parameter from the drop-down, and the graph now uses our auto-populated set of App Service Plans.
Now the code/JSON of the workbook no longer contains any hard-coded references to IDs.
You can see the “resourceIds” field is now set by our “ResourceApplicationPlans” parameter, which is dynamically generated and selects all the App Plans deployed in the resource group the workbook is deployed in.
We can now automate the deployment of the workbook without templating the JSON!
Bonus: if you add a new App Plan, the workbook will pick it up and start graphing it. You can use the same approach to add parameters detecting other resource types, like Cosmos DB, and graph those too.
As we needed VNET integration for the sensitive data handled on the project, I set out to build an ACI-like experience on a VM and have that connected to a VNET in Azure. The setup needed to:
- Handle restarting the container if things go wrong
- Give an easy way to retrieve the container logs from the command line
- Connect reliably to the VNET
- Support updating easily (i.e. when I push a new image tag, the container is restarted running the new version, and when new environment variables are applied, the container is restarted to pick them up)
- Support authentication to an Azure Container Registry
- Be runnable as part of a Terraform deployment
Seems like a pretty long list, right? At this point I reached out to a friend, Marcus Robison, who’d done more Windows admin than me in his time. He suggested looking at PowerShell DSC.
So what is PowerShell DSC, and what does it give us?
- Desired State Configuration for the VM: “I want a VM that looks like X” and it makes that happen, much like Terraform or a K8s operator. It queries the current state and takes actions that move the current state closer to the desired state.
- Integration with Azure VMs: there is a nice extension in Azure which allows you to submit a DSC config, and Azure manages running it on the VM for you.
- Secure handling of sensitive variables: with the Azure extension, variables are encrypted.
What does this all look like when you have it finished?
Each Script resource (think: a resource in Terraform) has a Get, Set and Test method. Test checks the current state of things; if they’re not how they’re meant to be, Set is responsible for getting them configured correctly; and lastly Get returns an identifier for the item.
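As a minimal sketch of that shape (the container name and image here are hypothetical placeholders, not the project’s actual config):

```powershell
Configuration ContainerHost
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node "localhost"
    {
        Script RunContainer
        {
            # Test: is a container with our name already running?
            # (docker's name filter is a substring match, fine for a sketch)
            TestScript = {
                (docker ps --filter "name=myapp" --format "{{.Names}}") -contains "myapp"
            }
            # Set: (re)start the container to match the desired state
            SetScript = {
                docker rm -f myapp 2>$null
                docker run -d --name myapp myacr.azurecr.io/myapp:latest
            }
            # Get: return an identifier for the item
            GetScript = {
                @{ Result = (docker ps --filter "name=myapp" --format "{{.ID}}") }
            }
        }
    }
}
```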
module.tf
This is the Terraform responsible for creating the VM and pushing up the DSC script for it to run.
It takes the dsc_config.ps1 and creates a zip file; this zip is then passed to the PowerShell DSC extension for the Azure VM, which is responsible for applying the configuration to the VM.
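For a feel of the shape, the extension wiring looks roughly like this (a sketch, not the module’s actual code: the resource names, handler version and blob URL are placeholders):

```
resource "azurerm_virtual_machine_extension" "dsc" {
  name                 = "dsc"
  virtual_machine_id   = azurerm_windows_virtual_machine.vm.id
  publisher            = "Microsoft.Powershell"
  type                 = "DSC"
  type_handler_version = "2.83"

  settings = jsonencode({
    configuration = {
      url      = azurerm_storage_blob.dsc_zip.url # the zip built from dsc_config.ps1
      script   = "dsc_config.ps1"
      function = "ContainerHost" # the Configuration name inside the script
    }
  })
}
```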
As well as this, the module also takes the environment variables you want set for your container. These are provided as a map and converted to a base64-encoded .env file. The DSC config on the VM decodes them and provides the .env file to the docker run command used to start the container.
*Worth noting: env.tpl is used in the process of creating the env file.
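The decode step on the VM might look something like this (a sketch under assumed names; $EnvFileBase64, the path and the image are placeholders):

```powershell
# Decode the base64 payload passed in from Terraform back into a .env file
$envFilePath = "C:\app\.env"
[System.IO.File]::WriteAllBytes($envFilePath, [Convert]::FromBase64String($EnvFileBase64))

# Hand the .env file to docker run when starting the container
docker run -d --env-file $envFilePath myacr.azurecr.io/myapp:latest
```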
usage.tf
This is an example of using the Terraform module from module.tf to create a VM which runs a container image on a VNET with a set of environment variables.
getlogs.ps1
Once deployed, this little script demonstrates how you can get the logs out of the container running on the VM. It requires the Azure CLI to be installed and needs you to provide the VM’s Azure ID.
You can also hook this up to the outputs of your Terraform to automate it further.
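One way to do this (a sketch; I’m assuming the az vm run-command route here, with a placeholder VM ID and container name):

```powershell
# Run 'docker logs' inside the VM via the Azure CLI and print the output
$vmId = "/subscriptions/YOURSUB/resourceGroups/rg-processing/providers/Microsoft.Compute/virtualMachines/container-vm"
az vm run-command invoke `
  --ids $vmId `
  --command-id RunPowerShellScript `
  --scripts "docker logs myapp --tail 100"
```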
All together now!